Seminar by G. Volpe at MTL BrainHack School 2019, Montreal, Canada, 22 August 2019

Be friendly to your users:
Add comments and tutorials to your code
Giovanni Volpe
MTL BrainHack School 2019, Montreal, 22 August 2019
https://brainhackmtl.github.io/school2019/

When releasing a software package, it is critical to provide potential users with all the information they need to use it.
Using the example of Braph (a software package we recently developed to study brain connectivity, http://braph.org/), I'll illustrate how we have commented the code, created a website and offline documentation, and recorded video tutorials.
I'll conclude with some practical advice and best practices.
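As an illustration of the kind of commenting the talk advocates, here is a minimal Python sketch of a documented graph measure of the sort used in brain-connectivity analysis (a hypothetical function, not taken from the Braph codebase): the docstring states inputs and outputs, and inline comments explain the non-obvious steps.

```python
def clustering_coefficient(adjacency):
    """Average clustering coefficient of an undirected, unweighted graph.

    Parameters
    ----------
    adjacency : list[list[int]]
        Square 0/1 adjacency matrix with zero diagonal.

    Returns
    -------
    float
        Mean over nodes of (links among neighbours) / (possible links).
    """
    n = len(adjacency)
    coefficients = []
    for i in range(n):
        # Neighbours of node i.
        nbrs = [j for j in range(n) if adjacency[i][j]]
        k = len(nbrs)
        if k < 2:
            # Fewer than two neighbours: no triangle is possible.
            coefficients.append(0.0)
            continue
        # Count each link among the neighbours of i exactly once.
        links = sum(
            adjacency[u][v]
            for a, u in enumerate(nbrs)
            for v in nbrs[a + 1:]
        )
        coefficients.append(2.0 * links / (k * (k - 1)))
    return sum(coefficients) / n
```

The point is not the measure itself but the habit: a reader (or a future maintainer) can use the function from the docstring alone, without reverse-engineering the loop.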

Talk by G. Volpe at SPIE OTOM XVI, San Diego, 14 Aug 2019

FORMA: a high-performance algorithm for the calibration of optical tweezers
Laura Pérez-García, Alejandro V. Arzola, Jaime Donlucas Pérez, Giorgio Volpe  & Giovanni Volpe
SPIE Nanoscience + Engineering, Optical Trapping and Optical Manipulation XVI, San Diego (CA), USA
11-15 August 2019

We introduce FORMA, a powerful algorithm for the calibration of optical tweezers. FORMA accurately estimates the conservative and non-conservative components of the force field, with important advantages over established techniques: it is parameter-free, requires ten-fold less data, and executes orders of magnitude faster. We demonstrate FORMA's performance using optical tweezers, showing how it outperforms other available techniques and can identify and characterise stable and unstable equilibrium points in generic force fields.

Reference: Pérez-García et al., Nature Communications 9, 5166 (2018)
doi: 10.1038/s41467-018-07437-x
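The core idea, recovering the force field by linear regression of the instantaneous friction force against the particle position, can be sketched in one dimension. Below is a minimal numpy illustration in reduced units; the parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# --- Simulate an optically trapped Brownian particle (overdamped Langevin,
# Euler scheme). Reduced units chosen for illustration only.
rng = np.random.default_rng(1)
k_true = 5.0      # trap stiffness
gamma = 1.0       # friction coefficient
kBT = 1.0         # thermal energy
dt = 1e-3         # sampling time
n = 200_000       # number of samples

x = np.empty(n)
x[0] = 0.0
noise = np.sqrt(2 * kBT / gamma * dt) * rng.standard_normal(n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - (k_true / gamma) * x[i] * dt + noise[i]

# --- FORMA-style estimate: regress the instantaneous friction force
# gamma * dx/dt against the position x; for a harmonic trap the slope is -k.
f = gamma * np.diff(x) / dt
k_hat = -np.sum(x[:-1] * f) / np.sum(x[:-1] ** 2)
print(f"estimated stiffness: {k_hat:.2f} (true: {k_true})")
```

Note that the regression involves no user-defined parameters beyond the known friction coefficient and sampling time, which is the sense in which the method is parameter-free; the full algorithm generalizes this fit to multidimensional, non-conservative force fields.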

Presentation by Saga Helgadottir at the AI for Health and Healthy AI conference, Gothenburg, Sweden, 30 August 2019

Digital video microscopy enhanced by deep learning

Saga Helgadottir, Aykut Argun & Giovanni Volpe
AI for Health and Healthy AI conference, Gothenburg, Sweden
30 August 2019

Single particle tracking is essential in many branches of science and technology, from the measurement of biomolecular forces to the study of colloidal crystals. Standard methods rely on algorithmic approaches; by fine-tuning several user-defined parameters, these methods can be highly successful at tracking a well-defined kind of particle under low-noise conditions with constant and homogeneous illumination. Here, we introduce an alternative data-driven approach based on a convolutional neural network, which we name DeepTrack. We show that DeepTrack outperforms algorithmic approaches, especially in the presence of noise and under poor illumination conditions. We use DeepTrack to track an optically trapped particle under very noisy and unsteady illumination conditions, where standard algorithmic approaches fail. We then demonstrate how DeepTrack can also be used to track multiple particles and non-spherical objects such as bacteria, even at very low signal-to-noise ratios. To make DeepTrack readily available to other users, we provide a Python software package that can be easily personalized and optimized for specific applications.
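Data-driven trackers of this kind are trained on synthetically generated particle images with known ground-truth positions. Below is a numpy-only sketch of one such training pair; the helper name and its parameters are hypothetical and are not part of the DeepTrack API.

```python
import numpy as np

def particle_image(cx, cy, size=51, radius=3.0, snr=10.0, rng=None):
    """Synthetic microscopy-like frame: a Gaussian spot at (cx, cy) on a
    noisy background. Hypothetical helper, not part of the DeepTrack API."""
    if rng is None:
        rng = np.random.default_rng()
    y, x = np.mgrid[0:size, 0:size]
    signal = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * radius ** 2))
    noise = rng.standard_normal((size, size)) / snr
    return signal + noise

# One training pair: the image is the network input and the
# ground-truth centre (cx, cy) is the regression target.
rng = np.random.default_rng(0)
cx, cy = rng.uniform(15, 35, size=2)
image, label = particle_image(cx, cy, rng=rng), (cx, cy)
```

Because the ground truth is known exactly by construction, arbitrarily large labelled training sets can be generated without manual annotation, at any desired noise level or illumination profile.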

Saga Helgadottir, Aykut Argun & Giovanni Volpe, Optica 6(4), 506-513 (2019)
doi: 10.1364/OPTICA.6.000506
arXiv: 1812.02653
GitHub: DeepTrack

Presentation by Saga Helgadottir at the CECAM Workshop “Active Matter and Artificial Intelligence”, Lausanne, Switzerland, 30 September 2019

Digital video microscopy enhanced by deep learning

Saga Helgadottir, Aykut Argun & Giovanni Volpe
CECAM Workshop “Active Matter and Artificial Intelligence”, Lausanne, Switzerland
30 September 2019

Single particle tracking is essential in many branches of science and technology, from the measurement of biomolecular forces to the study of colloidal crystals. Standard methods rely on algorithmic approaches; by fine-tuning several user-defined parameters, these methods can be highly successful at tracking a well-defined kind of particle under low-noise conditions with constant and homogeneous illumination. Here, we introduce an alternative data-driven approach based on a convolutional neural network, which we name DeepTrack. We show that DeepTrack outperforms algorithmic approaches, especially in the presence of noise and under poor illumination conditions. We use DeepTrack to track an optically trapped particle under very noisy and unsteady illumination conditions, where standard algorithmic approaches fail. We then demonstrate how DeepTrack can also be used to track multiple particles and non-spherical objects such as bacteria, even at very low signal-to-noise ratios. To make DeepTrack readily available to other users, we provide a Python software package that can be easily personalized and optimized for specific applications.

Saga Helgadottir, Aykut Argun & Giovanni Volpe, Optica 6(4), 506-513 (2019)
doi: 10.1364/OPTICA.6.000506
arXiv: 1812.02653
GitHub: DeepTrack
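To see why algorithmic trackers struggle at low signal-to-noise ratio, one can test a classic threshold-and-centroid estimator on synthetic spots. The numpy-only sketch below (illustrative parameters) implements the baseline that networks of this kind are compared against, not DeepTrack itself.

```python
import numpy as np

rng = np.random.default_rng(2)
size, cx, cy = 51, 23.0, 27.0
y, x = np.mgrid[0:size, 0:size]
spot = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 18.0)  # Gaussian particle image

errors = []
for snr in (10.0, 1.0):
    img = spot + rng.standard_normal((size, size)) / snr
    # Classic pipeline: threshold the frame, then take the
    # intensity-weighted centroid of what survives.
    img = np.where(img > 0.5 * img.max(), img, 0.0)
    ex = (x * img).sum() / img.sum()
    ey = (y * img).sum() / img.sum()
    errors.append(float(np.hypot(ex - cx, ey - cy)))
    print(f"SNR {snr:4.1f}: centroid error {errors[-1]:.2f} pixels")
```

At high SNR the threshold isolates the spot and the centroid is accurate to a fraction of a pixel; at SNR near 1 the threshold passes random noise spikes and the estimate breaks down, which is the regime where the data-driven approach is reported to retain accuracy.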

03:40 PM–04:00 PM, Monday, September 30, 2019

Plenary Presentation by G. Volpe at SPIE Nanoscience + Engineering, San Diego, 12 Aug 2019

Optical forces go smart
Giovanni Volpe
Plenary Presentation
SPIE Nanoscience + Engineering, San Diego (CA), USA
11-15 August 2019

Optical forces have revolutionized nanotechnology. In particular, they have been used to measure and exert femtonewton-scale forces on nanoscopic objects. This has provided the essential tools to develop nanothermodynamics, to explore nanoscopic interactions such as critical Casimir forces, and to realize microscopic devices capable of autonomous operation. The future of optical forces now lies in the development of smarter experimental setups and data-analysis algorithms, partially empowered by the machine-learning revolution. This will open unprecedented possibilities, such as the study of energy and information flows in nanothermodynamic systems, the design of novel forms of interactions between nanoparticles, and the realization of smart microscopic devices.