Deep-learning-powered data analysis in plankton ecology published in Limnology and Oceanography Letters

Segmentation of two plankton species using deep learning (N. scintillans in blue, D. tertiolecta in green). (Image by H. Bachimanchi.)
Deep-learning-powered data analysis in plankton ecology
Harshith Bachimanchi, Matthew I. M. Pinder, Chloé Robert, Pierre De Wit, Jonathan Havenhand, Alexandra Kinnby, Daniel Midtvedt, Erik Selander, Giovanni Volpe
Limnology and Oceanography Letters (2024)
doi: 10.1002/lol2.10392
arXiv: 2309.08500

The implementation of deep learning algorithms has brought new perspectives to plankton ecology. Emerging as an alternative approach to established methods, deep learning offers objective schemes to investigate plankton organisms in diverse environments. We provide an overview of deep-learning-based methods, including the detection and classification of phytoplankton and zooplankton images, the analysis of foraging and swimming behavior, and, finally, ecological modeling. Deep learning has the potential to speed up analysis and reduce human experimental bias, thus enabling data acquisition at relevant temporal and spatial scales with improved reproducibility. We also discuss shortcomings and show how deep-learning architectures have evolved to mitigate imprecise readouts. Finally, we suggest opportunities where deep learning is particularly likely to catalyze plankton research. The examples are accompanied by detailed tutorials and code samples that allow readers to apply the methods described in this review to their own data.
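As a flavor of the classification task covered in the review, here is a minimal, hypothetical sketch (not taken from the paper's tutorials) of a small convolutional classifier for plankton image crops in PyTorch; the crop size, channel count, and two-species setup are placeholder assumptions.

```python
# Illustrative sketch only (not the paper's code): a small CNN that classifies
# plankton image crops into species, e.g. N. scintillans vs D. tertiolecta.
# Crop size (64x64, grayscale) and the two-class setup are assumptions.
import torch
import torch.nn as nn

class PlanktonClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PlanktonClassifier(n_classes=2)
crops = torch.randn(8, 1, 64, 64)                   # placeholder image batch
labels = torch.randint(0, 2, (8,))                  # placeholder species labels
loss = nn.CrossEntropyLoss()(model(crops), labels)  # one illustrative training step
loss.backward()
```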

Giovanni Volpe awarded the Göran Gustafsson prize

(Photo by Johan Wingborg.)
Giovanni Volpe was awarded one of Sweden’s most prestigious prizes for physics, the Göran Gustafsson Prize, which is handed out by the Göran Gustafsson Foundation with the help of the Royal Swedish Academy of Sciences. Giovanni receives the physics prize “for boundary-breaking research focusing on microscopic particles with active functions”. The prize sum is 6.3 million SEK.

More details here:
Press release of Gothenburg University: Giovanni Volpe receives prestigious Göran Gustafsson prize
Press release of Kungl. Vetenskapsakademien: 33 miljoner till forskning om bland annat TBE och smarta mikropartiklar (33 million for research on, among other things, TBE and smart microparticles)

Dual-Angle Interferometric Scattering Microscopy for Optical Multiparametric Particle Characterization published in Nano Letters

Conceptual schematic of dual-angle interferometric scattering microscopy (DAISY). (Image by the Authors of the manuscript.)
Dual-Angle Interferometric Scattering Microscopy for Optical Multiparametric Particle Characterization
Erik Olsén, Berenice García Rodríguez, Fredrik Skärberg, Petteri Parkkila, Giovanni Volpe, Fredrik Höök, and Daniel Sundås Midtvedt
Nano Letters (2024)
doi: 10.1021/acs.nanolett.3c03539
arXiv: 2309.07572

Traditional single-nanoparticle sizing using optical microscopy techniques assesses size via the diffusion constant, which requires suspended particles to be in a medium of known viscosity. However, these assumptions are typically not fulfilled in complex natural sample environments. Here, we introduce dual-angle interferometric scattering microscopy (DAISY), enabling optical quantification of both size and polarizability of individual nanoparticles (radius <170 nm) without requiring a priori information regarding the surrounding media or super-resolution imaging. DAISY achieves this by combining the information contained in concurrently measured forward and backward scattering images through twilight off-axis holography and interferometric scattering (iSCAT). Going beyond particle size and polarizability, single-particle morphology can be deduced from the fact that the hydrodynamic radius relates to the outer particle radius, while the scattering-based size estimate depends on the internal mass distribution of the particles. We demonstrate this by differentiating biomolecular fractal aggregates from spherical particles in fetal bovine serum at the single-particle level.
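For context, conventional optical sizing infers the hydrodynamic radius from the measured diffusion constant through the Stokes-Einstein relation, which is where the known-viscosity assumption enters (standard textbook relation, not a formula specific to this paper):

```latex
% Stokes-Einstein relation: the hydrodynamic radius r_h follows from the
% measured diffusion constant D only if the medium viscosity \eta is known.
D = \frac{k_{\mathrm{B}} T}{6 \pi \eta \, r_{\mathrm{h}}}
\qquad \Longrightarrow \qquad
r_{\mathrm{h}} = \frac{k_{\mathrm{B}} T}{6 \pi \eta D}
```

The scattering-based estimate in DAISY instead depends on the particle polarizability and internal mass distribution, which is why comparing the two size estimates carries morphological information, as described above.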

Connecting genomic results for psychiatric disorders to human brain cell types and regions reveals convergence with functional connectivity on medRxiv

Brain region connectivity. (Image by the Authors of the manuscript.)
Connecting genomic results for psychiatric disorders to human brain cell types and regions reveals convergence with functional connectivity
Shuyang Yao, Arvid Harder, Fahimeh Darki, Yu-Wei Chang, Ang Li, Kasra Nikouei, Giovanni Volpe, Johan N. Lundström, Jian Zeng, Naomi Wray, Yi Lu, Patrick F. Sullivan, Jens Hjerling-Leffler
medRxiv: 10.1101/2024.01.18.24301478

Understanding the temporal and spatial brain locations etiological for psychiatric disorders is essential for targeted neurobiological research. Integration of genomic insights from genome-wide association studies with single-cell transcriptomics is a powerful approach, although past efforts have necessarily relied on mouse atlases. Leveraging a comprehensive atlas of the adult human brain, we prioritized cell types via the enrichment of SNP-heritabilities for brain diseases, disorders, and traits, progressing from individual cell types to brain regions. Our findings highlight specific neuronal clusters significantly enriched for the SNP-heritabilities of schizophrenia, bipolar disorder, and major depressive disorder, along with intelligence, education, and neuroticism. Extrapolation of cell-type results to brain regions reveals important patterns for schizophrenia, with distinct subregions in the hippocampus and amygdala exhibiting the highest significance. Cerebral cortical regions display similar enrichments despite the known prefrontal dysfunction in those with schizophrenia, highlighting the importance of subcortical connectivity. Using functional MRI connectivity from cases with schizophrenia and neurotypical controls, we identified brain networks that distinguished cases from controls and that also confirmed the involvement of the central and lateral amygdala, hippocampal body, and prefrontal cortex. Our findings underscore the value of single-cell transcriptomics in decoding the polygenicity of psychiatric disorders and offer a promising convergence of genomic, transcriptomic, and brain imaging modalities toward common biological targets.
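As a back-of-the-envelope illustration of the enrichment statistic behind such cell-type prioritization (a generic sketch with hypothetical numbers, not the authors' pipeline or data), fold enrichment compares the share of SNP-heritability captured by an annotation with the share of SNPs it contains:

```python
# Generic illustration (not the authors' pipeline): fold enrichment of
# SNP-heritability in a set of SNPs associated with a given cell type.
def heritability_enrichment(h2_in_annotation, h2_total,
                            n_snps_in_annotation, n_snps_total):
    """Fold enrichment = (share of h2 in annotation) / (share of SNPs in annotation)."""
    h2_share = h2_in_annotation / h2_total
    snp_share = n_snps_in_annotation / n_snps_total
    return h2_share / snp_share

# Hypothetical numbers: 10% of the heritability in 2% of the SNPs -> 5x enrichment.
print(heritability_enrichment(0.05, 0.50, 20_000, 1_000_000))  # 5.0
```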

Nanoalignment by Critical Casimir Torques on ArXiv

Artist rendition of a disk-shaped microparticle trapped above a circular uncoated pattern within a thin gold layer coated on a glass surface. (Image by the Authors of the manuscript.)
Nanoalignment by Critical Casimir Torques
Gan Wang, Piotr Nowakowski, Nima Farahmand Bafi, Benjamin Midtvedt, Falko Schmidt, Ruggero Verre, Mikael Käll, S. Dietrich, Svyatoslav Kondrat, Giovanni Volpe
arXiv: 2401.06260

The manipulation of microscopic objects requires precise and controllable forces and torques. Recent advances have led to the use of critical Casimir forces as a powerful tool, which can be finely tuned through the temperature of the environment and the chemical properties of the involved objects. For example, these forces have been used to self-organize ensembles of particles and to counteract stiction caused by Casimir-Lifshitz forces. However, until now, the potential of critical Casimir torques has been largely unexplored. Here, we demonstrate that critical Casimir torques can efficiently control the alignment of microscopic objects on nanopatterned substrates. We show experimentally, and corroborate with theoretical calculations and Monte Carlo simulations, that circular patterns on a substrate can stabilize the position and orientation of microscopic disks. By making the patterns elliptical, such microdisks can be subjected to a torque that flips them upright while simultaneously allowing for more accurate control of the microdisk position. More complex patterns can selectively trap 2D-chiral particles and generate particle motion similar to non-equilibrium Brownian ratchets. These findings provide new opportunities for nanotechnological applications requiring precise positioning and orientation of microscopic objects.
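As a purely illustrative sketch of how an aligning torque stabilizes orientation (a generic Metropolis Monte Carlo toy model, not the authors' calculation), consider a single angular degree of freedom in an assumed potential U(θ) = -A cos 2θ standing in for the pattern-induced torque:

```python
# Illustrative Metropolis Monte Carlo of a single orientation angle in a generic
# aligning potential U(theta) = -A*cos(2*theta), a stand-in for a pattern-induced
# critical Casimir torque (not the authors' model or parameters).
import numpy as np

rng = np.random.default_rng(0)
A_over_kT = 3.0                                     # well depth in units of k_B T (assumed)

def U(theta):                                       # potential energy in units of k_B T
    return -A_over_kT * np.cos(2 * theta)

theta, samples = 0.0, []
for step in range(200_000):
    trial = theta + rng.normal(scale=0.2)           # small random rotation
    if rng.random() < np.exp(U(theta) - U(trial)):  # Metropolis acceptance rule
        theta = trial
    if step > 50_000:                               # discard burn-in
        samples.append(np.cos(2 * theta))

# Orientational order parameter <cos 2θ>: approaches 1 as the well deepens.
print(np.mean(samples))
```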

Colloquium by G. Volpe at the Mini-Symposium with Giovanni Volpe and Pawel Sikorski, Lund, 11 January 2024

(Image by A. Argun)
Deep Learning for Imaging and Microscopy
Giovanni Volpe
Mini-Symposium with Giovanni Volpe and Pawel Sikorski, Lund, Sweden, 11 January 2024
Date: 11 January 2024
Time: 15:15

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized, primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we have introduced a software package, DeepTrack 2.1, to design, train, and validate deep-learning solutions for digital microscopy.
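Training such networks typically relies on simulated images with known ground truth. Below is a generic, minimal sketch of that idea (plain NumPy, not the DeepTrack 2.1 API): noisy frames of a diffraction-limited spot are generated together with the true spot position.

```python
# Generic sketch (not the DeepTrack 2.1 API): synthesize noisy frames containing
# a single Gaussian spot with a known position, the kind of simulated data used
# to train localization networks for digital microscopy.
import numpy as np

def simulate_frame(size=64, sigma=2.0, noise=0.05, rng=np.random.default_rng()):
    """Return a noisy frame with one Gaussian spot and its ground-truth position."""
    x0, y0 = rng.uniform(10, size - 10, size=2)          # ground-truth position (pixels)
    yy, xx = np.mgrid[0:size, 0:size]
    image = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
    image += rng.normal(scale=noise, size=image.shape)   # camera-like noise
    return image.astype(np.float32), np.array([x0, y0], dtype=np.float32)

frames, positions = zip(*(simulate_frame() for _ in range(1000)))  # small training set
```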

Accelerating Plasmonic Hydrogen Sensors for Inert Gas Environments by Transformer-Based Deep Learning on ArXiv

Schematic illustration of the plasmonic H2 sensing principle, where the sorption of hydrogen into hydride-forming metal nanoparticles induces a change in their localized surface plasmon resonance frequency, which leads to a color change that is resolved in a spectroscopic measurement in the visible light spectral range. (Image by the Authors of the manuscript.)
Accelerating Plasmonic Hydrogen Sensors for Inert Gas Environments by Transformer-Based Deep Learning
Viktor Martvall, Henrik Klein Moberg, Athanasios Theodoridis, David Tomeček, Pernilla Ekborg-Tanner, Sara Nilsson, Giovanni Volpe, Paul Erhart, Christoph Langhammer
arXiv: 2312.15372

The ability to rapidly detect hydrogen gas upon occurrence of a leak is critical for the safe large-scale implementation of hydrogen (energy) technologies. However, to date, no technically viable sensor solution exists that meets the corresponding response time targets set by stakeholders at technically relevant conditions. Here, we demonstrate how a tailored Long Short-term Transformer Ensemble Model for Accelerated Sensing (LEMAS) accelerates the response of a state-of-the-art optical plasmonic hydrogen sensor by up to a factor of 40 in an oxygen-free inert gas environment, by accurately predicting its response value to a hydrogen concentration change before it is physically reached by the sensor hardware. Furthermore, it eliminates the pressure dependence of the response intrinsic to metal hydride-based sensors, while leveraging their ability to operate in oxygen-starved environments that are proposed to be used for inert gas encapsulation systems of hydrogen installations. Moreover, LEMAS provides a measure of the uncertainty of the predictions that is pivotal for safety-critical sensor applications. Our results thus advertise the use of deep learning for the acceleration of sensor response, also beyond the realm of plasmonic hydrogen detection.
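As a hedged illustration of the general idea (not the authors' LEMAS implementation), a small transformer encoder can map a partial sensor-response time series to a prediction of its eventual response value, and an ensemble of such models can provide a spread that serves as an uncertainty estimate; all architecture details below are placeholder assumptions.

```python
# Illustrative sketch (not LEMAS itself): a small transformer encoder that maps
# a partial sensor-response trace to a prediction of its final response value;
# the ensemble spread is used as a rough uncertainty measure.
import torch
import torch.nn as nn

class ResponsePredictor(nn.Module):
    def __init__(self, d_model=32, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                         # x: (batch, time, 1)
        h = self.encoder(self.embed(x))
        return self.head(h.mean(dim=1))           # predicted final response value

ensemble = [ResponsePredictor() for _ in range(5)]
partial_trace = torch.randn(1, 100, 1)            # placeholder partial response trace
with torch.no_grad():
    preds = torch.stack([m(partial_trace) for m in ensemble])
mean, uncertainty = preds.mean(), preds.std()     # ensemble mean and spread
```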

Symposium on AI, Neuroscience, and Aging featured on ANSA.it

The Symposium on AI, Neuroscience, and Aging has been featured on ANSA.it news, in an article with the title Simposio italo-svedese a Stoccolma sull’IA e la neuroscienza (in Italian; “Italian-Swedish symposium in Stockholm on AI and neuroscience”).

ANSA (an acronym standing for Agenzia Nazionale Stampa Associata) is the leading news agency in Italy and one of the top-ranking news agencies in the world.

Optimal calibration of optical tweezers with arbitrary integration time and sampling frequencies – A general framework published in Biomedical Optics Express

Different sampling methods for the trajectory of a particle. (Adapted from the manuscript.)
Optimal calibration of optical tweezers with arbitrary integration time and sampling frequencies – A general framework
Laura Pérez-García, Martin Selin, Antonio Ciarlo, Alessandro Magazzù, Giuseppe Pesce, Antonio Sasso, Giovanni Volpe, Isaac Pérez Castillo, Alejandro V. Arzola
Biomedical Optics Express, 14, 6442-6469 (2023)
doi: 10.1364/BOE.495468
arXiv: 2305.07245

Optical tweezers (OT) have become an essential technique in several fields of physics, chemistry, and biology as precise micromanipulation tools and microscopic force transducers. Quantitative measurements require the accurate calibration of the trap stiffness and of the diffusion constant of the optically trapped particle. This is typically done with statistical estimators constructed from the position signal of the particle, which is recorded by a digital camera or a quadrant photodiode. The finite integration time and sampling frequency of the detector need to be properly taken into account. Here, we present a general approach based on the joint probability density function of the sampled trajectory that exactly corrects the biases due to the detector’s finite integration time and limited sampling frequency, providing theoretical formulas for the most widely employed calibration methods: equipartition, mean squared displacement, autocorrelation, power spectral density, and force reconstruction via maximum-likelihood-estimator analysis (FORMA). Our results, tested with experiments and Monte Carlo simulations, will permit users of OT to confidently estimate the trap stiffness and diffusion constant, extending their use to a broader set of experimental conditions.
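As a baseline for one of the calibration methods listed above, here is the naive equipartition estimator of the trap stiffness, written without the finite-integration-time and finite-sampling corrections that the paper derives (the temperature and trajectory below are placeholder values):

```python
# Naive equipartition calibration: kappa = k_B * T / var(x). This is the
# uncorrected textbook estimator; the paper provides the corrections for the
# detector's finite integration time and limited sampling frequency.
import numpy as np

k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # temperature, K (assumed)

def equipartition_stiffness(x):
    """Trap stiffness (N/m) from particle positions x (in meters) along one axis."""
    return k_B * T / np.var(x - np.mean(x))

# Synthetic example with a placeholder trajectory of ~15 nm positional spread.
x = np.random.default_rng(1).normal(scale=15e-9, size=100_000)
print(equipartition_stiffness(x))   # ≈ k_B*T / (15 nm)^2 ≈ 1.8e-5 N/m
```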

Talk by G. Volpe at the Symposium on AI, Neuroscience, and Aging, Stockholm, 27 November 2023

(Image by A. Argun)
Deep Learning for Imaging and Microscopy
Giovanni Volpe
Symposium on AI, Neuroscience, and Aging, Stockholm, Sweden, 27 November 2023
Date: 27 November 2023
Time: 15:55

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized, primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we have introduced a software package, DeepTrack 2.1, to design, train, and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking, and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.1 can be easily customized for user-specific applications, and, thanks to its open-source, object-oriented codebase, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.
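As a flavor of the particle-localization task mentioned above, here is a minimal generic sketch (PyTorch, not DeepTrack 2.1 code) of a network that regresses a particle's (x, y) position from an image crop; it could be trained on simulated frames with known ground-truth positions such as those sketched in the Lund talk entry above.

```python
# Minimal generic sketch (not DeepTrack 2.1 itself): a CNN that regresses the
# (x, y) position of a particle from a 64x64 grayscale image crop.
import torch
import torch.nn as nn

class Localizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),                       # predicted (x, y) in pixels
        )

    def forward(self, x):
        return self.net(x)

model = Localizer()
images = torch.randn(8, 1, 64, 64)                  # placeholder image batch
targets = torch.rand(8, 2) * 64                     # placeholder ground-truth positions
loss = nn.MSELoss()(model(images), targets)         # one illustrative training step
loss.backward()
```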