Press release on Active Droploids

The article Active droploids has been featured in a press release by the University of Gothenburg.

The study, published in Nature Communications, examines a special system of colloidal particles and demonstrates a new kind of active matter that interacts with and modifies its environment. In the long run, the results of the study could be used for drug delivery inside the human body or for sensing and cleaning up environmental pollutants.

Here are the links to the press releases:
English: Feedback creates a new class of active biomimetic materials.
Swedish: Feedback möjliggör en ny form av aktiva biomimetiska material.

The article has also been featured in Mirage News, Science Daily, Phys.org, Innovations Report, Informationsdienst Wissenschaft (idw) online, and Nanowerk.

Active droploids published in Nature Communications

Active droploids. (Image taken from the article.)
Active droploids
Jens Grauer, Falko Schmidt, Jesús Pineda, Benjamin Midtvedt, Hartmut Löwen, Giovanni Volpe & Benno Liebchen
Nat. Commun. 12, 6005 (2021)
doi: 10.1038/s41467-021-26319-3
arXiv: 2109.10677

Active matter comprises self-driven units, such as bacteria and synthetic microswimmers, that can spontaneously form complex patterns and assemble into functional microdevices. These processes are possible thanks to the out-of-equilibrium nature of active-matter systems, fueled by a one-way free-energy flow from the environment into the system. Here, we take the next step in the evolution of active matter by realizing a two-way coupling between active particles and their environment, where active particles act back on the environment giving rise to the formation of superstructures. In experiments and simulations we observe that, under light-illumination, colloidal particles and their near-critical environment create mutually-coupled co-evolving structures. These structures unify in the form of active superstructures featuring a droplet shape and a colloidal engine inducing self-propulsion. We call them active droploids—a portmanteau of droplet and colloids. Our results provide a pathway to create active superstructures through environmental feedback.
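
To build intuition for this two-way coupling, here is a minimal toy simulation in Python (our illustration with made-up parameters, not the model used in the paper): Brownian particles deposit into a diffusing, decaying scalar field that stands in for the perturbed environment, and the field gradient in turn biases the particles' motion, so particles and environment co-evolve.

import numpy as np

# Toy two-way coupling (illustration only, not the paper's model):
# particles act on a scalar environment field, and the field's
# gradient acts back on the particles.
rng = np.random.default_rng(0)
L, n_grid, n_part = 20.0, 64, 30            # box size, grid cells, particles
dx = L / n_grid
dt, steps = 0.01, 2000                      # time step, number of steps
D_part, D_field = 0.1, 1.0                  # particle and field diffusivities
source, decay, mobility = 5.0, 0.5, 2.0     # toy coupling parameters

pos = rng.uniform(0, L, size=(n_part, 2))   # particle positions
c = np.zeros((n_grid, n_grid))              # environment field

for _ in range(steps):
    # particles act on the environment: deposit into the nearest cell
    idx = (pos / dx).astype(int) % n_grid
    np.add.at(c, (idx[:, 0], idx[:, 1]), source * dt / dx**2)
    # the field diffuses and decays (periodic finite differences)
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    c += dt * (D_field * lap - decay * c)
    # the environment acts back: particles drift up the field gradient
    gx = (np.roll(c, -1, 0) - np.roll(c, 1, 0)) / (2 * dx)
    gy = (np.roll(c, -1, 1) - np.roll(c, 1, 1)) / (2 * dx)
    drift = mobility * np.stack([gx[idx[:, 0], idx[:, 1]],
                                 gy[idx[:, 0], idx[:, 1]]], axis=1)
    noise = np.sqrt(2 * D_part * dt) * rng.standard_normal((n_part, 2))
    pos = (pos + drift * dt + noise) % L    # periodic boundaries

With this attractive feedback, the particles aggregate and sustain long-lived peaks in the field, a crude analogue of the co-evolving superstructures described above.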

Press release on Extracting quantitative biological information from bright-field cell images using deep learning

Virtually stained image generated for lipid droplets.

The article Extracting quantitative biological information from bright-field cell images using deep learning has been featured in a press release by the University of Gothenburg.

The study, recently published in Biophysics Reviews, shows how artificial intelligence can be used to obtain faster, cheaper, and more reliable information about cells, while also eliminating the disadvantages of using chemicals in the process.

Here are the links to the press releases on Cision:
Swedish: Effektivare studier av celler med ny AI-metod
English: More effective cell studies using new AI method

Here are the links to the press releases in the news section of the University of Gothenburg website:
Swedish: Effektivare studier av celler med ny AI-metod
English: More effective cell studies using new AI method

Extracting quantitative biological information from bright-field cell images using deep learning featured in AIP Scilight

The article Extracting quantitative biological information from bright-field cell images using deep learning has been featured in “Staining Cells Virtually Offers Alternative Approach to Chemical Dyes”, AIP Scilight (July 23, 2021).

Scilight showcases the most interesting research across the physical sciences published in AIP Publishing Journals.

Scilight is published weekly (52 issues per year) by AIP Publishing.

Extracting quantitative biological information from bright-field cell images using deep learning published in Biophysics Reviews

Virtually stained image generated for lipid droplets.
Extracting quantitative biological information from bright-field cell images using deep learning
Saga Helgadottir, Benjamin Midtvedt, Jesús Pineda, Alan Sabirsh, Caroline B. Adiels, Stefano Romeo, Daniel Midtvedt, Giovanni Volpe
Biophysics Rev. 2, 031401 (2021)
doi: 10.1063/5.0044782
arXiv: 2012.12986

Quantitative analysis of cell structures is essential for biomedical and pharmaceutical research. The standard imaging approach relies on fluorescence microscopy, where cell structures of interest are labeled by chemical staining techniques. However, these techniques are often invasive and sometimes even toxic to the cells, in addition to being time-consuming, labor-intensive, and expensive. Here, we introduce an alternative deep-learning-powered approach based on the analysis of bright-field images by a conditional generative adversarial neural network (cGAN). We show that this approach can extract information from the bright-field images to generate virtually-stained images, which can be used in subsequent downstream quantitative analyses of cell structures. Specifically, we train a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using bright-field images of human stem-cell-derived fat cells (adipocytes), which are of particular interest for nanomedicine and vaccine development. Subsequently, we use these virtually-stained images to extract quantitative measures about these cell structures. Generating virtually-stained fluorescence images is less invasive, less expensive, and more reproducible than standard chemical staining; furthermore, it frees up the fluorescence microscopy channels for other analytical probes, thus increasing the amount of information that can be extracted from each cell.
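
As an illustration of the kind of conditional GAN the abstract describes, the sketch below shows a compact pix2pix-style training step in TensorFlow/Keras. The layer sizes and loss weighting are generic placeholders, not the authors' architecture: the generator translates a bright-field image into a virtual stain, the discriminator scores (bright-field, stain) pairs, and an L1 term keeps the generated stain close to the chemically stained ground truth.

import tensorflow as tf
from tensorflow.keras import layers

def make_generator():
    # toy encoder-decoder; the paper uses a U-Net-style generator
    inp = layers.Input((128, 128, 1))
    x = layers.Conv2D(32, 4, 2, "same", activation="relu")(inp)
    x = layers.Conv2D(64, 4, 2, "same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 4, 2, "same", activation="relu")(x)
    out = layers.Conv2DTranspose(1, 4, 2, "same", activation="tanh")(x)
    return tf.keras.Model(inp, out)

def make_discriminator():
    # PatchGAN-style critic on concatenated (bright-field, stain) channels
    inp = layers.Input((128, 128, 2))
    x = layers.Conv2D(32, 4, 2, "same", activation="relu")(inp)
    x = layers.Conv2D(64, 4, 2, "same", activation="relu")(x)
    return tf.keras.Model(inp, layers.Conv2D(1, 4, 1, "same")(x))

gen, disc = make_generator(), make_discriminator()
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt, d_opt = tf.keras.optimizers.Adam(2e-4), tf.keras.optimizers.Adam(2e-4)

@tf.function
def train_step(brightfield, stained):
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake = gen(brightfield, training=True)
        d_real = disc(tf.concat([brightfield, stained], -1), training=True)
        d_fake = disc(tf.concat([brightfield, fake], -1), training=True)
        # generator: fool the critic and stay close to the true stain (L1)
        g_loss = (bce(tf.ones_like(d_fake), d_fake)
                  + 100.0 * tf.reduce_mean(tf.abs(stained - fake)))
        d_loss = (bce(tf.ones_like(d_real), d_real)
                  + bce(tf.zeros_like(d_fake), d_fake))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, gen.trainable_variables),
                              gen.trainable_variables))
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, disc.trainable_variables),
                              disc.trainable_variables))
    return g_loss, d_loss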

Quantitative Digital Microscopy with Deep Learning published in Applied Physics Reviews

Particle tracking and characterization in terms of radius and refractive index.

Quantitative Digital Microscopy with Deep Learning
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt, Giovanni Volpe
Applied Physics Reviews 8, 011310 (2021)
doi: 10.1063/5.0034891
arXiv: 2010.08260

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automatized, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we introduce a software, DeepTrack 2.0, to design, train and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.0 can be easily customized for user-specific applications, and, thanks to its open-source object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.
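
The workflow the paper builds on, simulating images with exactly known ground truth and training a network on them, can be sketched in a few lines of plain NumPy/Keras. This is an illustration of the idea only; DeepTrack 2.0's own API replaces the toy simulation below with physically accurate optics models and ready-made architectures.

import numpy as np
import tensorflow as tf

def simulate(n, size=32):
    # Gaussian-blob "particles" at random positions plus camera noise;
    # the positions are known exactly, so no manual annotation is needed.
    yy, xx = np.mgrid[0:size, 0:size]
    pos = np.random.uniform(8, size - 8, (n, 2))
    imgs = np.exp(-((xx - pos[:, None, None, 0]) ** 2 +
                    (yy - pos[:, None, None, 1]) ** 2) / (2 * 2.0 ** 2))
    imgs += np.random.normal(0, 0.05, imgs.shape)
    return imgs[..., None].astype("float32"), (pos / size).astype("float32")

x, y = simulate(5000)
model = tf.keras.Sequential([
    tf.keras.layers.Input((32, 32, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2),               # regressed (x, y), scaled to [0, 1]
])
model.compile("adam", "mse")
model.fit(x, y, epochs=5, batch_size=64, verbose=0)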

Fast and Accurate Nanoparticle Characterization Using Deep-Learning-Enhanced Off-Axis Holography published in ACS Nano

Phase and amplitude signals from representative particles used to test the performance of the deep-learning approach.

Fast and Accurate Nanoparticle Characterization Using Deep-Learning-Enhanced Off-Axis Holography
Benjamin Midtvedt, Erik Olsén, Fredrik Eklund, Fredrik Höök, Caroline Beck Adiels, Giovanni Volpe, Daniel Midtvedt
ACS Nano 15(2), 2240–2250 (2021)
doi: 10.1021/acsnano.0c06902
arXiv: 2006.11154

The characterisation of the physical properties of nanoparticles in their native environment plays a central role in a wide range of fields, from nanoparticle-enhanced drug delivery to environmental nanopollution assessment. Standard optical approaches require long trajectories of nanoparticles dispersed in a medium with known viscosity to characterise their diffusion constant and, thus, their size. However, often only short trajectories are available, while the medium viscosity is unknown, e.g., in most biomedical applications. In this work, we demonstrate a label-free method to quantify size and refractive index of individual subwavelength particles using two orders of magnitude shorter trajectories than required by standard methods, and without assumptions about the physicochemical properties of the medium. We achieve this by developing a weighted average convolutional neural network to analyse the holographic images of the particles. As a proof of principle, we distinguish and quantify size and refractive index of silica and polystyrene particles without prior knowledge of solute viscosity or refractive index. As an example of an application beyond the state of the art, we demonstrate how this technique can monitor the aggregation of polystyrene nanoparticles, revealing the time-resolved dynamics of the monomer number and fractal dimension of individual subwavelength aggregates. This technique opens new possibilities for nanoparticle characterisation with a broad range of applications from biomedicine to environmental monitoring.
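
One way to read the weighted-average idea, sketched here in Keras as our interpretation of the abstract rather than the authors' code: a CNN emits a per-frame estimate together with a confidence weight, and the trajectory-level prediction is the weight-normalized average over frames, which is what allows very short trajectories to be used.

import tensorflow as tf
from tensorflow.keras import layers

# Per-frame CNN: maps a (phase, amplitude) image to an estimate of
# (radius, refractive index) plus an unnormalized confidence weight.
cnn = tf.keras.Sequential([
    layers.Input((48, 48, 2)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(3),
])

frames = layers.Input((None, 48, 48, 2))    # a short trajectory of frames
per_frame = layers.TimeDistributed(cnn)(frames)
# trajectory-level prediction: confidence-weighted average over frames
pred = layers.Lambda(lambda t: tf.reduce_sum(
    tf.nn.softmax(t[..., 2:], axis=1) * t[..., :2], axis=1))(per_frame)
model = tf.keras.Model(frames, pred)
model.compile("adam", "mse")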

Soft Matter Lab presentations at the SPIE Optics+Photonics Digital Forum

Seven members of the Soft Matter Lab (Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Laura Pérez-García, Daniel Midtvedt, Harshith Bachimanchi, Emiliano Gómez) were selected for oral and poster presentations at the SPIE Optics+Photonics Digital Forum, August 24-28, 2020.

The SPIE Digital Forum is a free, online-only event.
The registration for the Digital Forum includes access to all presentations and proceedings.

The Soft Matter Lab contributions are part of the SPIE Nanoscience + Engineering conferences, namely the conference on Emerging Topics in Artificial Intelligence 2020 and the conference on Optical Trapping and Optical Micromanipulation XVII.

The contributions being presented are listed below, including the presentations co-authored by Giovanni Volpe.

Note: the presentation times are given in PDT (Pacific Daylight Time, GMT-7).

Emerging Topics in Artificial Intelligence 2020

Saga Helgadottir
Digital video microscopy with deep learning (Invited Paper)
26 August 2020, 10:30 AM
SPIE Link: here.

Aykut Argun
Calibration of force fields using recurrent neural networks
26 August 2020, 8:30 AM
SPIE Link: here.

Laura Pérez-García
Deep-learning enhanced light-sheet microscopy
25 August 2020, 9:10 AM
SPIE Link: here.

Daniel Midtvedt
Holographic characterization of subwavelength particles enhanced by deep learning
24 August 2020, 2:40 PM
SPIE Link: here.

Benjamin Midtvedt
DeepTrack: A comprehensive deep learning framework for digital microscopy
26 August 2020, 11:40 AM
SPIE Link: here.

Gorka Muñoz-Gil
The anomalous diffusion challenge: Single trajectory characterisation as a competition
26 August 2020, 12:00 PM
SPIE Link: here.

Meera Srikrishna
Brain tissue segmentation using U-Nets in cranial CT scans
25 August 2020, 2:00 PM
SPIE Link: here.

Juan S. Sierra
Automated corneal endothelium image segmentation in the presence of cornea guttata via convolutional neural networks
26 August 2020, 11:50 AM
SPIE Link: here.

Harshith Bachimanchi
Digital holographic microscopy driven by deep learning: A study on marine planktons (Poster)
24 August 2020, 5:30 PM
SPIE Link: here.

Emiliano Gómez
BRAPH 2.0: Software for the analysis of brain connectivity with graph theory (Poster)
24 August 2020, 5:30 PM
SPIE Link: here.

Optical Trapping and Optical Micromanipulation XVII

Laura Pérez-García
Reconstructing complex force fields with optical tweezers
24 August 2020, 5:00 PM
SPIE Link: here.

Alejandro V. Arzola
Direct visualization of the spin-orbit angular momentum conversion in optical trapping
25 August 2020, 10:40 AM
SPIE Link: here.

Isaac Lenton
Illuminating the complex behaviour of particles in optical traps with machine learning
26 August 2020, 9:10 AM
SPIE Link: here.

Fatemeh Kalantarifard
Optical trapping of microparticles and yeast cells at ultra-low intensity by intracavity nonlinear feedback forces
24 August 2020, 11:10 AM
SPIE Link: here.


Benjamin Midtvedt joins the Soft Matter Lab

Benjamin Midtvedt starts his PhD at the Physics Department of the University of Gothenburg on 1 July 2020.

Benjamin holds a Master's degree in Engineering Mathematics and Computer Science from Chalmers University of Technology.

In his PhD, he will focus on using deep learning to design the behaviour of particles interacting with light.

Benjamin Midtvedt defended his Master Thesis on 15 June 2020. Congrats!

Benjamin Midtvedt defended his Master Thesis in Engineering Mathematics and Computer Science at Chalmers University of Technology on 15 June 2020. Congrats!

Screenshot of Benjamin Midtvedt’s Master Thesis defence.
Title: DeepTrack: A comprehensive deep learning framework for digital microscopy

Despite the rapid advancement of deep-learning methods for image analysis, they remain underutilized for the analysis of microscopy images. State-of-the-art methods require deep-learning expertise to implement, disconnecting the development of new methods from end users. The packages that are available are typically highly specialized, challenging to reappropriate, and almost impossible to interface with other methods. Finally, training deep-learning models often requires large datasets of manually annotated images, making it prohibitively difficult to procure training data that accurately represents the problem.

DeepTrack is a deep-learning framework targeting optical microscopy, designed to account for each of these issues. Firstly, it is packaged with an easy-to-use graphical user interface, solving standard microscopy problems with no programming experience required. Secondly, it bypasses the need for manually annotated experimental data by providing a comprehensive programming API for creating representative synthetic data, designed to exactly suit the problem. DeepTrack creates physical simulations of samples described by refractive index or fluorophore distributions, using fully customizable optical systems. To accurately represent the data to be analyzed, DeepTrack supports arbitrary optical aberrations and experimental noise. Thirdly, many standard deep-learning methods are packaged with DeepTrack, including architectures such as U-Net, and regularization techniques such as augmentations, decreasing the barrier to entry. Finally, the framework is fully modular and easily extendable to implement new methods, providing both longevity and a centralized foundation to deploy new deep-learning solutions.

We demonstrate the versatility of DeepTrack by training networks to solve a broad range of common microscopy problems, including particle tracking, cell-counting in dense biological samples, multi-particle 3-dimensional tracking, and cell segmentation and classification.
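
To make the synthetic-data idea concrete, here is a minimal NumPy sketch of the image-formation pipeline described above (a stand-in illustration, not DeepTrack's API): a fluorophore distribution is rendered through an idealized Gaussian PSF and corrupted with shot and read noise, yielding a training image whose annotations are known exactly.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
size, n_emitters = 128, 40
sample = np.zeros((size, size))
xy = rng.integers(0, size, (n_emitters, 2))     # ground-truth positions
sample[xy[:, 0], xy[:, 1]] = rng.uniform(0.5, 1.0, n_emitters)

image = gaussian_filter(sample, sigma=2.0)      # Gaussian stand-in for the PSF
image = rng.poisson(image * 500) / 500.0        # shot (Poisson) noise
image += rng.normal(0, 0.01, image.shape)       # read (Gaussian) noise
# (image, xy) is one synthetic training pair with exact annotations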

Master Programme: Engineering Mathematics and Computer Science
Supervisor: Giovanni Volpe
Examiner: Giovanni Volpe
Opponents: Aykut Argun and Saga Helgadóttir

Time: 15 June 2020, 16:00
Place: Online via Zoom