Extracting quantitative biological information from brightfield cell images using deep learning on ArXiv

Virtually-stained generated image of lipid droplets.
Extracting quantitative biological information from brightfield cell images using deep learning
Saga Helgadottir, Benjamin Midtvedt, Jesús Pineda, Alan Sabirsh, Caroline B. Adiels, Stefano Romeo, Daniel Midtvedt, Giovanni Volpe
arXiv: 2012.12986

Quantitative analysis of cell structures is essential for biomedical and pharmaceutical research. The standard imaging approach relies on fluorescence microscopy, where cell structures of interest are labeled by chemical staining techniques. However, these techniques are often invasive and sometimes even toxic to the cells, in addition to being time-consuming, labor-intensive, and expensive. Here, we introduce an alternative deep-learning-powered approach based on the analysis of brightfield images by a conditional generative adversarial neural network (cGAN). We show that this approach can extract information from the brightfield images to generate virtually-stained images, which can be used in subsequent downstream quantitative analyses of cell structures. Specifically, we train a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using brightfield images of human stem-cell-derived fat cells (adipocytes), which are of particular interest for nanomedicine and vaccine development. Subsequently, we use these virtually-stained images to extract quantitative measures about these cell structures. Generating virtually-stained fluorescence images is less invasive, less expensive, and more reproducible than standard chemical staining; furthermore, it frees up the fluorescence microscopy channels for other analytical probes, thus increasing the amount of information that can be extracted from each cell.
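The cGAN objective described above can be illustrated with a rough, self-contained sketch. This is not the network from the paper: the generator and discriminator below are stand-in numpy functions, all names are hypothetical, and only the pix2pix-style loss structure (an adversarial term plus an L1 term weighted by a factor lambda) reflects the described approach.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(brightfield):
    # Stand-in for the trained cGAN generator: maps a brightfield image
    # to a virtually-stained fluorescence image of the same shape.
    return np.clip(brightfield * 0.8 + 0.1, 0.0, 1.0)

def discriminator(brightfield, stained):
    # Stand-in critic: scores how plausible the (brightfield, stained)
    # pair looks in [0, 1]; a real model would be a convolutional network.
    return 1.0 / (1.0 + np.exp(-(stained - brightfield).mean()))

def cgan_generator_loss(brightfield, target, lam=100.0):
    # Generator objective: fool the critic (adversarial term) while
    # staying close to the chemically-stained target (L1 term).
    fake = generator(brightfield)
    adversarial = -np.log(discriminator(brightfield, fake) + 1e-12)
    l1 = np.abs(fake - target).mean()
    return adversarial + lam * l1

x = rng.random((64, 64))  # brightfield input
y = rng.random((64, 64))  # chemically-stained target
loss = cgan_generator_loss(x, y)
```

The L1 weight (here 100, as in pix2pix) trades off realism against pixel-wise fidelity to the real stain.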

Improving epidemic testing and containment strategies using machine learning on ArXiv

Comparison of different evolution regimes of disease spreading: free evolution (bottom left half) vs network strategy (top right half).
Improving epidemic testing and containment strategies using machine learning
Laura Natali, Saga Helgadottir, Onofrio M. Maragò, Giovanni Volpe
arXiv: 2011.11717

Containment of epidemic outbreaks entails great societal and economic costs. Cost-effective containment strategies rely on efficiently identifying infected individuals, making the best possible use of the available testing resources. Therefore, quickly identifying the optimal testing strategy is of critical importance. Here, we demonstrate that machine learning can be used to identify which individuals are most beneficial to test, automatically and dynamically adapting the testing strategy to the characteristics of the disease outbreak. Specifically, we simulate an outbreak using the archetypal susceptible-infectious-recovered (SIR) model and we use data about the first confirmed cases to train a neural network that learns to make predictions about the rest of the population. Using these predictions, we manage to contain the outbreak more effectively and more quickly than with standard approaches. Furthermore, we demonstrate how this method can also be used when there is a possibility of reinfection (SIRS model) to efficiently eradicate an endemic disease.
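As a minimal sketch of the first ingredient above, the snippet below simulates a discrete-time, well-mixed SIR outbreak in plain numpy. The paper's simulation and neural-network testing strategy are not reproduced here; the parameter values and function names are illustrative assumptions.

```python
import numpy as np

def simulate_sir(beta=0.3, gamma=0.1, n=10_000, i0=10, days=120):
    # Discrete-time SIR dynamics in a well-mixed population of size n:
    # beta is the infection rate, gamma the recovery rate (per day).
    s, i, r = float(n - i0), float(i0), 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

hist = simulate_sir()
peak_infected = max(i for _, i, _ in hist)
```

In the paper's setting, the early confirmed cases from such a simulated outbreak would serve as training data for a network that ranks the remaining individuals by how informative testing them would be.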

Quantitative Digital Microscopy with Deep Learning on ArXiv

Particle tracking and characterization in terms of radius and refractive index.

Quantitative Digital Microscopy with Deep Learning
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt, Giovanni Volpe
arXiv: 2010.08260

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time-consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized, primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we introduce a software platform, DeepTrack 2.0, to design, train, and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking, and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.0 can be easily customized for user-specific applications, and, thanks to its open-source object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.
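A core idea behind such frameworks is training on simulated data with known ground truth. The framework-free sketch below illustrates this idea only; it is not the DeepTrack 2.0 API, and all names and parameters are hypothetical: render a particle at a known position with noise, and use that position as the training label.

```python
import numpy as np

rng = np.random.default_rng(1)

def synthetic_sample(size=32, sigma2=4.0, noise=0.05):
    # One synthetic training pair: a random particle position is both
    # the ground-truth label and the input to the image simulation.
    x0, y0 = rng.uniform(4, size - 4, 2)
    yy, xx = np.mgrid[0:size, 0:size]
    image = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / sigma2)
    image += rng.normal(0, noise, image.shape)  # experimental noise
    return image, (x0, y0)

# A batch of labeled images, ready to feed a localization network.
batch = [synthetic_sample() for _ in range(8)]
```

Because the images are simulated, arbitrarily large labeled datasets can be generated, sidestepping manual annotation of experimental videos.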

Presentation by S. Helgadottir at the Gothenburg Science Festival, 2 October 2020

Logo of the Gothenburg Science Festival.

Saga Helgadottir will give a presentation at the Gothenburg Science Festival 2020.

The International Science Festival Gothenburg is one of Europe’s leading popular science events. Its first edition dates back to 1997, and it is held every year in spring.
This year the festival will take place in autumn, 28 September-4 October. Due to the current situation, it will be held as a digital event, with content available throughout the festival week.

The contribution of Saga Helgadottir will be presented according to the following schedule:

Saga Helgadottir
Deep Learning for Object Recognition
Deep Learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example. In this talk, I will show how Deep Learning can be used to identify objects in images, in particular microscopic particles.

Date: 2 October 2020
Time: 18:08
Duration: 17′
Link: Deep Learning for Object Recognition

Links:
Vetenskapsfestivalen Göteborg (in Swedish)
The International Science Festival Gothenburg (in English)
Full Program

Diagnosis of a genetic disease improves with machine learning, a summary in Swedish published in Fysikaktuellt

Neural networks consist of a series of connected layers of neurons, whose connection weights are adjusted to learn how to determine the diagnosis from the input data.

A summary in Swedish of our previously published article “Virtual genetic diagnosis for familial hypercholesterolemia powered by machine learning” has been published in Fysikaktuellt, the journal of the Swedish Physical Society (Svenska fysikersamfundet).

Article: “Diagnostisering av sjukdomar förbättras med maskininlärning”, Saga Helgadottir, Giovanni Volpe and Stefano Romeo (in Swedish)

Original article: Virtual genetic diagnosis for familial hypercholesterolemia powered by machine learning

Press release: 
Algoritm lär sig diagnostisera genetisk sjukdom (in Swedish)
An algorithm that learns to diagnose genetic disease (in English)

Soft Matter Lab presentations at the SPIE Optics+Photonics Digital Forum

Seven members of the Soft Matter Lab (Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Laura Pérez-García, Daniel Midtvedt, Harshith Bachimanchi, Emiliano Gómez) were selected for oral and poster presentations at the SPIE Optics+Photonics Digital Forum, August 24-28, 2020.

The SPIE Digital Forum is a free, online-only event.
Registration for the Digital Forum includes access to all presentations and proceedings.

The Soft Matter Lab contributions are part of the SPIE Nanoscience + Engineering conferences, namely the conference on Emerging Topics in Artificial Intelligence 2020 and the conference on Optical Trapping and Optical Micromanipulation XVII.

The contributions being presented are listed below, including the presentations co-authored by Giovanni Volpe.

Note: the presentation times are indicated according to PDT (Pacific Daylight Time) (GMT-7)

Emerging Topics in Artificial Intelligence 2020

Saga Helgadottir
Digital video microscopy with deep learning (Invited Paper)
26 August 2020, 10:30 AM
SPIE Link: here.

Aykut Argun
Calibration of force fields using recurrent neural networks
26 August 2020, 8:30 AM
SPIE Link: here.

Laura Pérez-García
Deep-learning enhanced light-sheet microscopy
25 August 2020, 9:10 AM
SPIE Link: here.

Daniel Midtvedt
Holographic characterization of subwavelength particles enhanced by deep learning
24 August 2020, 2:40 PM
SPIE Link: here.

Benjamin Midtvedt
DeepTrack: A comprehensive deep learning framework for digital microscopy
26 August 2020, 11:40 AM
SPIE Link: here.

Gorka Muñoz-Gil
The anomalous diffusion challenge: Single trajectory characterisation as a competition
26 August 2020, 12:00 PM
SPIE Link: here.

Meera Srikrishna
Brain tissue segmentation using U-Nets in cranial CT scans
25 August 2020, 2:00 PM
SPIE Link: here.

Juan S. Sierra
Automated corneal endothelium image segmentation in the presence of cornea guttata via convolutional neural networks
26 August 2020, 11:50 AM
SPIE Link: here.

Harshith Bachimanchi
Digital holographic microscopy driven by deep learning: A study on marine planktons (Poster)
24 August 2020, 5:30 PM
SPIE Link: here.

Emiliano Gómez
BRAPH 2.0: Software for the analysis of brain connectivity with graph theory (Poster)
24 August 2020, 5:30 PM
SPIE Link: here.

Optical Trapping and Optical Micromanipulation XVII

Laura Pérez-García
Reconstructing complex force fields with optical tweezers
24 August 2020, 5:00 PM
SPIE Link: here.

Alejandro V. Arzola
Direct visualization of the spin-orbit angular momentum conversion in optical trapping
25 August 2020, 10:40 AM
SPIE Link: here.

Isaac Lenton
Illuminating the complex behaviour of particles in optical traps with machine learning
26 August 2020, 9:10 AM
SPIE Link: here.

Fatemeh Kalantarifard
Optical trapping of microparticles and yeast cells at ultra-low intensity by intracavity nonlinear feedback forces
24 August 2020, 11:10 AM
SPIE Link: here.


Digital video microscopy with deep learning

Digital video microscopy with deep learning
Saga Helgadottir
(Invited paper)

Microscopic particle tracking has a long history of providing insights and breakthroughs within the physical and biological sciences, starting when Jean Perrin proved the existence of atoms in 1910 by projecting images of microscopic colloidal particles onto a sheet of paper and manually tracking their displacements. Since the start of digital video microscopy over 20 years ago, automated single-particle tracking algorithms have followed a similar pattern: pre-processing of the image to reduce noise, segmentation of the image to identify the features of interest, refinement of these feature coordinates to sub-pixel accuracy, and linking of the feature coordinates over several images to construct particle trajectories. By fine-tuning several user-defined parameters, these methods can be highly successful at tracking a well-defined kind of particle under good imaging conditions. However, their performance degrades severely under unsteady imaging conditions.
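The first three steps of the classic pipeline described above can be sketched on a single synthetic frame. This is a deliberately simplified, hypothetical implementation (one particle, crude background subtraction, no trajectory linking), not the algorithm of any particular package.

```python
import numpy as np

def locate_particle(image, threshold=0.5):
    # Classic pipeline on a single frame:
    # 1) pre-process: crude background subtraction,
    # 2) segment: threshold to find bright pixels,
    # 3) refine: intensity-weighted centroid for sub-pixel accuracy
    #    (assuming a single particle; real code would label blobs).
    smoothed = image - np.median(image)
    ys, xs = np.nonzero(smoothed > threshold * smoothed.max())
    weights = smoothed[ys, xs]
    x_c = np.sum(xs * weights) / np.sum(weights)
    y_c = np.sum(ys * weights) / np.sum(weights)
    return x_c, y_c

# Synthetic frame: one Gaussian "particle" centered at (20.3, 11.7)
yy, xx = np.mgrid[0:32, 0:32]
frame = np.exp(-((xx - 20.3) ** 2 + (yy - 11.7) ** 2) / 4.0)
x_hat, y_hat = locate_particle(frame)
```

The fourth step, linking, would then match such coordinates across consecutive frames; it is this whole hand-tuned chain that deep-learning approaches replace with a single trained network.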
To overcome the limitations of traditional algorithmic approaches, data-driven methods using deep learning have been introduced. Deep-learning algorithms based on convolutional neural networks have been shown to accurately localize holographic colloidal particles and fluorescent biological objects. We have recently developed DeepTrack, a software package based on a convolutional neural network that outperforms algorithmic approaches in tracking colloidal particles as well as non-spherical biological objects, especially in the presence of noise and under poor illumination conditions.
In this talk, I will give an overview of the history of particle tracking, before explaining the details of our solution, DeepTrack, and finally giving an outlook on the field of deep learning in microscopy.

Time and place: Presentation published online on 24 August 2020
SPIE Link: here.

DeepTrack: A comprehensive deep learning framework for digital microscopy

DeepTrack: A comprehensive deep learning framework for digital microscopy
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Daniel Midtvedt, Giovanni Volpe
Click here to see the slides.

Despite the rapid advancement of deep-learning methods for image analysis, they remain underutilized for the analysis of digital microscopy images. State-of-the-art methods require expertise in deep learning to implement, disconnecting the development of new methods from end-users. The packages that are available are typically highly specialized, difficult to reappropriate, and almost impossible to interface with other methods. Finally, it is prohibitively difficult to procure representative datasets with corresponding labels. DeepTrack is a deep-learning framework targeting optical microscopy, designed to address each of these issues. Firstly, it is packaged with an easy-to-use graphical user interface, solving standard microscopy problems with no programming experience required. Secondly, it provides a comprehensive programming API for creating representative synthetic data, designed to exactly suit the problem. DeepTrack images samples of refractive-index or fluorophore distributions using physical simulations of customizable optical systems. To accurately represent the data to be analyzed, DeepTrack supports arbitrary optical aberrations and experimental noise. Thirdly, many standard deep-learning methods are packaged with DeepTrack, including architectures such as U-Net and regularization techniques such as augmentations. Finally, the framework is fully modular and easily extendable to implement new methods, providing both longevity and a centralized foundation to deploy new deep-learning solutions. To demonstrate the versatility of the framework, we show a few typical use cases, including cell counting in dense biological samples, extracting 3-dimensional tracks from 2-dimensional videos, and distinguishing and tracking microorganisms in brightfield videos.

Poster Session
Time: June 22nd 2020
Place: Twitter and virtual reality

POM Conference
Link: 
POM
Time: June 25th 2020
Place: Online

Poster Slides

Saga Helgadottir – POM Poster – Page 1
Saga Helgadottir – POM Poster – Page 2
Saga Helgadottir – POM Poster – Page 3
Saga Helgadottir – POM Poster – Page 4

Soft Matter Lab presentations at the Photonics Online Meet-up, 22 June 2020

Six members of the Soft Matter Lab (Aykut Argun, Falko Schmidt, Laura Pérez-Garcia, Saga Helgadottir, Alessandro Magazzù, Daniel Midtvedt) were selected for poster presentations at the Photonics Online Meet-up (POM).

POM is an entirely free virtual conference. It aims to bring together a community of early career and established researchers from universities, industry, and government working in optics and photonics.

The meeting, now in its second edition, will be held on June 25th 2020, 9:00-14:30 Central European Time. The virtual poster session will take place on June 22nd, on Twitter and in virtual reality.

The poster contributions being presented are:

Aykut Argun
Enhanced force-field calibration via machine learning
Twitter Link: here.

Falko Schmidt
Dynamics of an active nanoparticle in an optical trap
Twitter Link: here.

Laura Pérez-García
Optical force field reconstruction using Brownian trajectories
Twitter Link: here.

Saga Helgadottir
DeepTrack: A comprehensive deep learning framework for digital microscopy
Twitter Link: here.

Alessandro Magazzù
Controlling the dynamics of colloidal particles by critical Casimir forces
Twitter Link: here.

Daniel Midtvedt
Holographic characterisation of subwavelength particles enhanced by deep learning
Twitter Link: here.

Link: Photonics Online Meet-up (POM)

Presentation by S. Helgadottir at SAIS Workshop, 17 June 2020

Saga Helgadottir will give a presentation at the 32nd annual workshop of the Swedish Artificial Intelligence Society (SAIS), which will be held as an online conference on June 16-17, 2020.

The SAIS workshop is a forum for building the Swedish AI research community and nurturing networks across academia and industry. Because of concerns about COVID-19, this year's workshop is held online.

The contribution of Saga Helgadottir will be presented according to the following schedule:

Saga Helgadottir
Medical Diagnosis with Machine Learning
Date: 17 June 2020
Time: 15:00 CEST

Link: SAIS Workshop 2020 program