Seminar by G. Volpe at the Department of Chemistry, University of Gothenburg, 19 Sep 2019

Soft Matter Meets Deep Learning
Giovanni Volpe
Department of Chemistry, University of Gothenburg, Sweden
19 September 2019

After a brief overview of artificial intelligence, machine learning and deep learning, I will present a series of recent works in which we have employed deep learning for applications in photonics and active matter. In particular, I will explain how we employed deep learning to enhance digital video microscopy [1], to estimate the properties of anomalous diffusion [2], and to improve the calculation of optical forces. Finally, I will provide an outlook for the application of deep learning in photonics and active matter.

References

[1] S. Helgadottir, A. Argun and G. Volpe, Digital video microscopy enhanced by deep learning. Optica 6(4), 506–513 (2019)
doi: 10.1364/OPTICA.6.000506

[2] S. Bo, F. Schmidt, R. Eichhorn and G. Volpe, Measurement of Anomalous Diffusion Using Recurrent Neural Networks. arXiv: 1905.02038

Introductory Talk at CECAM Workshop “Active Matter and Artificial Intelligence”, Lausanne, Switzerland, 1 October 2019

Machine learning for active matter
Giovanni Volpe
Introductory Talk at CECAM Workshop “Active Matter and Artificial Intelligence”
CECAM-HQ-EPFL, Lausanne, Switzerland
30 September – 2 October, 2019

Data-driven machine-learning methods are increasingly widely used in the natural sciences. Active-matter research is no exception and has recently started experimenting with machine-learning approaches. Machine learning offers unprecedented opportunities, but it also poses unexpected practical and fundamental challenges. Most importantly, machine-learning methods often work as black boxes, so it can be difficult to understand and interpret their results. Here, we present an overview of the current state of the art of the adoption of machine learning in active-matter research. Finally, we discuss the emerging opportunities and challenges, highlighting how active matter and machine learning can work together for mutual benefit.

Invited talk by G. Volpe at RIAO/Optilas 2019, Cancun, Mexico, 23 Sep 2019

Deep Learning Applications in Digital Video Microscopy and Optical Micromanipulation
Saga Helgadottir, Aykut Argun, Giovanni Volpe
Invited talk at RIAO/Optilas 2019, Cancun, Mexico, 23-27 September 2019

Since its introduction in the mid-1990s, digital video microscopy has become a staple for the analysis of data in optical trapping and optical manipulation experiments [1]. Current methods can predict the location of the center of a particle under ideal conditions with high accuracy. However, these methods fail as the signal-to-noise ratio (SNR) of the images decreases or if non-uniform distortions are present in the images. Both conditions are commonly encountered in experiments. In addition, all these methods require considerable user input in terms of analysis parameters, which introduces user bias. Algorithms using deep learning have been introduced to automate the tracking process, but they have not yet proved usable in practical applications.

Here, we provide a fully automated deep-learning tracking algorithm that localizes the positions of single and multiple particles from image data with sub-pixel precision [2]. We have developed a convolutional neural network that is pre-trained on simulated single-particle images under varying conditions of, for example, particle intensity, image contrast and SNR.
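To illustrate this training strategy, here is a minimal sketch of how a simulated single-particle image with controlled intensity, background and SNR might be generated. This is a toy stand-in, not the actual DeepTrack image generator, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def simulate_particle_image(size=51, x=25.0, y=25.0, radius=3.0,
                            intensity=1.0, background=0.1, snr=10.0,
                            rng=None):
    """Simulate a single-particle image as a Gaussian spot plus noise.

    The (x, y) centre may be sub-pixel, so the ground-truth label used
    for training is known exactly by construction.
    """
    rng = np.random.default_rng() if rng is None else rng
    yy, xx = np.mgrid[0:size, 0:size]
    # Gaussian intensity profile centred on the (sub-pixel) particle position
    spot = intensity * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * radius ** 2))
    image = background + spot
    # Additive Gaussian noise scaled so that peak signal / noise std = snr
    image += rng.normal(0.0, intensity / snr, image.shape)
    return image

# One training sample: the noisy image and its ground-truth centre
img = simulate_particle_image(snr=5.0)
label = (25.0, 25.0)
```

Training on many such samples, with randomized position, intensity, contrast and SNR, is what lets the network generalize to non-ideal experimental images.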

We test the pre-trained network on an optically trapped particle, both under ideal conditions and under challenging conditions with low SNR and non-uniform distortions [3]. The pre-trained network accurately predicts the location of the trapped particle, and a comparison of the detected trajectories, the distribution of the particle position and the power spectral density of the particle trajectory clearly shows that our algorithm outperforms tracking by radial symmetry [4]. Our algorithm can also track non-ideal images with multiple Brownian particles, as well as swimming bacteria, which are problematic for traditional methods.

In conclusion, our algorithm outperforms current methods in precision and speed of tracking non-ideal images, while eliminating the need for user supervision and therefore the introduction of user biases. 

References

[1] John C Crocker, David G Grier, Journal of Colloid and Interface Science 179, 298–310 (1996).

[2] Saga Helgadottir, Aykut Argun, Giovanni Volpe, Optica 6, 506–513 (2019).

[3] Philip H Jones, Onofrio M Maragò, Giovanni Volpe, Optical tweezers: Principles and applications. Cambridge University Press, 2015.

[4] Raghuveer Parthasarathy, Nature Methods 9, 724–726 (2012).

Presentation by Saga Helgadottir at the CECAM Workshop “Active Matter and Artificial Intelligence”, Lausanne, Switzerland, 30 September 2019

Digital video microscopy enhanced by deep learning

Saga Helgadottir, Aykut Argun & Giovanni Volpe
CECAM Workshop “Active Matter and Artificial Intelligence”, Lausanne, Switzerland
30 September 2019

Single particle tracking is essential in many branches of science and technology, from the measurement of biomolecular forces to the study of colloidal crystals. Standard methods rely on algorithmic approaches; by fine-tuning several user-defined parameters, these methods can be highly successful at tracking a well-defined kind of particle under low-noise conditions with constant and homogeneous illumination. Here, we introduce an alternative data-driven approach based on a convolutional neural network, which we name DeepTrack. We show that DeepTrack outperforms algorithmic approaches, especially in the presence of noise and under poor illumination conditions. We use DeepTrack to track an optically trapped particle under very noisy and unsteady illumination conditions, where standard algorithmic approaches fail. We then demonstrate how DeepTrack can also be used to track multiple particles and non-spherical objects such as bacteria, even at very low signal-to-noise ratios. In order to make DeepTrack readily available for other users, we provide a Python software package, which can be easily personalized and optimized for specific applications.

Saga Helgadottir, Aykut Argun & Giovanni Volpe, Optica 6(4), 506–513 (2019)
doi: 10.1364/OPTICA.6.000506
arXiv: 1812.02653
GitHub: DeepTrack

03:40 PM–04:00 PM, Monday, September 30, 2019

Presentation by Saga Helgadottir at the AI for Health and Healthy AI conference, Gothenburg, Sweden, 30 August 2019

Digital video microscopy enhanced by deep learning

Saga Helgadottir, Aykut Argun & Giovanni Volpe
AI for Health and Healthy AI conference, Gothenburg, Sweden
30 August 2019

Single particle tracking is essential in many branches of science and technology, from the measurement of biomolecular forces to the study of colloidal crystals. Standard methods rely on algorithmic approaches; by fine-tuning several user-defined parameters, these methods can be highly successful at tracking a well-defined kind of particle under low-noise conditions with constant and homogeneous illumination. Here, we introduce an alternative data-driven approach based on a convolutional neural network, which we name DeepTrack. We show that DeepTrack outperforms algorithmic approaches, especially in the presence of noise and under poor illumination conditions. We use DeepTrack to track an optically trapped particle under very noisy and unsteady illumination conditions, where standard algorithmic approaches fail. We then demonstrate how DeepTrack can also be used to track multiple particles and non-spherical objects such as bacteria, even at very low signal-to-noise ratios. In order to make DeepTrack readily available for other users, we provide a Python software package, which can be easily personalized and optimized for specific applications.

Friday, August 30, 2019

Saga Helgadottir, Aykut Argun & Giovanni Volpe, Optica 6(4), 506–513 (2019)
doi: 10.1364/OPTICA.6.000506
arXiv: 1812.02653
GitHub: DeepTrack

Seminar by G. Volpe at MTL BrainHack School 2019, Montreal, Canada, 22 August 2019

Be friendly to your users:
Add comments and tutorials to your code
Giovanni Volpe
MTL BrainHack School 2019, Montreal, 22 August 2019
https://brainhackmtl.github.io/school2019/

When releasing a software package, it is critical to provide potential users with all the information they need to help them use it.
Using the example of Braph, a software package we recently developed to study brain connectivity (http://braph.org/), I'll illustrate how we have commented the code, created a website and offline documentation, and recorded video tutorials.
I'll conclude with some practical advice and best practices.
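As an illustration of the kind of in-code documentation the talk advocates, here is a hypothetical, self-contained example (written in Python and not taken from Braph's own code) of a function documented with a descriptive docstring and a usage example:

```python
def degree(adjacency):
    """Return the degree of each node of an undirected graph.

    Parameters
    ----------
    adjacency : list[list[int]]
        Square binary adjacency matrix; adjacency[i][j] == 1 if nodes
        i and j are connected.

    Returns
    -------
    list[int]
        Degree (number of neighbours) of each node.

    Example
    -------
    >>> degree([[0, 1], [1, 0]])
    [1, 1]
    """
    # Each row of the adjacency matrix lists one node's connections,
    # so the row sum is that node's degree.
    return [sum(row) for row in adjacency]
```

Docstrings like this double as documentation and as runnable examples (e.g. via doctest), which keeps the tutorials and the code from drifting apart.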

Talk by G. Volpe at SPIE OTOM XVI, San Diego, 14 Aug 2019

FORMA: a high-performance algorithm for the calibration of optical tweezers
Laura Pérez-García, Alejandro V. Arzola, Jaime Donlucas Pérez, Giorgio Volpe  & Giovanni Volpe
SPIE Nanoscience + Engineering, Optical Trapping and Optical Micromanipulation XVI, San Diego (CA), USA
11-15 August 2019

We introduce a powerful algorithm, FORMA, for the calibration of optical tweezers. FORMA accurately estimates the conservative and non-conservative components of the force field, with important advantages over established techniques: it is parameter-free, requires ten-fold less data and executes orders of magnitude faster. We demonstrate FORMA's performance using optical tweezers, showing how, outperforming other available techniques, it can identify and characterise stable and unstable equilibrium points in generic force fields.
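The full algorithm is described in the reference below; as a minimal 1D sketch of the underlying idea, regressing the estimated drift of a trajectory against position recovers the stiffness of a linear force field without any user-set parameters. All simulation parameters here are illustrative assumptions, and the real FORMA handles multidimensional, non-conservative force fields.

```python
import numpy as np

def estimate_stiffness(x, dt, gamma=1.0):
    """Estimate the stiffness k of a harmonic force F = -k x from a
    trajectory, by least-squares regression of the drift against position.

    1D toy illustration of the drift-regression idea; not the full FORMA.
    """
    drift = np.diff(x) / dt                          # finite-difference drift
    pos = x[:-1]
    slope = np.sum(pos * drift) / np.sum(pos ** 2)   # least squares through origin
    return -gamma * slope                            # drift = -(k / gamma) x

# Simulate an overdamped Brownian particle in a harmonic trap (k_true = 1)
rng = np.random.default_rng(1)
k_true, gamma, D, dt, n = 1.0, 1.0, 0.1, 0.01, 100_000
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - (k_true / gamma) * x[i] * dt + np.sqrt(2 * D * dt) * rng.normal()

k_est = estimate_stiffness(x, dt, gamma)
```

Because the estimator is a closed-form regression, it needs no binning, windowing or other analysis parameters, which is one source of the speed and data-efficiency advantages mentioned above.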

Reference: Pérez-García et al., Nature Communications 9, 5166 (2018)
doi: 10.1038/s41467-018-07437-x

Plenary Presentation by G. Volpe at SPIE Nanoscience + Engineering, San Diego, 12 Aug 2019

Optical forces go smart
Giovanni Volpe
Plenary Presentation
SPIE Nanoscience + Engineering, San Diego (CA), USA
11-15 August 2019

Optical forces have revolutionized nanotechnology. In particular, optical forces have been used to measure and exert femtonewton forces on nanoscopic objects. This has provided the essential tools to develop nanothermodynamics, to explore nanoscopic interactions such as critical Casimir forces, and to realize microscopic devices capable of autonomous operation. The future of optical forces now lies in the development of smarter experimental setups and data-analysis algorithms, partially empowered by the machine-learning revolution. This will open unprecedented possibilities, such as the study of the energy and information flows in nanothermodynamics systems, the design of novel forms of interactions between nanoparticles, and the realization of smart microscopic devices.

Invited talk by G. Volpe at MPI-PKS Workshop, Dresden, Germany, 23 July 2019

Deep Learning Applications in Photonics and Active Matter
Giovanni Volpe
Invited talk at the “Microscale Motion and Light” MPI-PKS Workshop, Dresden, Germany, 22-26 July 2019
https://www.pks.mpg.de/mml19/

After a brief overview of artificial intelligence, machine learning and deep learning, I will present a series of recent works in which we have employed deep learning for applications in photonics and active matter. In particular, I will explain how we employed deep learning to enhance digital video microscopy [1], to estimate the properties of anomalous diffusion [2], and to improve the calculation of optical forces. Finally, I will provide an outlook for the application of deep learning in photonics and active matter.

References

[1] S. Helgadottir, A. Argun and G. Volpe, Digital video microscopy enhanced by deep learning. Optica 6(4), 506–513 (2019)
doi: 10.1364/OPTICA.6.000506

[2] S. Bo, F. Schmidt, R. Eichhorn and G. Volpe, Measurement of Anomalous Diffusion Using Recurrent Neural Networks. arXiv: 1905.02038