Book “Simulation of Complex Systems” published at IOP

Book cover. (From the IOP website.)
The book Simulation of Complex Systems, authored by Aykut Argun, Agnese Callegari and Giovanni Volpe, was published by IOP Publishing in December 2021.

The book is available to students of the University of Gothenburg and Chalmers University of Technology through the library service of each institution.
The example codes presented in the book can be found on GitHub.
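As a flavour of the kind of simulations the book covers, here is a minimal sketch of an active Brownian particle in two dimensions. This is an illustrative example, not taken from the book or its GitHub repository, and all parameter values are arbitrary:

```python
import numpy as np

def simulate_abp(n_steps=1000, dt=0.01, v=1.0, D_t=0.1, D_r=0.5, seed=0):
    """Simulate one active Brownian particle; returns (n_steps, 2) positions."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_steps, 2))
    theta = 0.0
    for i in range(1, n_steps):
        # Rotational diffusion of the orientation angle
        theta += np.sqrt(2 * D_r * dt) * rng.standard_normal()
        # Self-propulsion along the orientation plus translational noise
        step = v * dt * np.array([np.cos(theta), np.sin(theta)])
        step += np.sqrt(2 * D_t * dt) * rng.standard_normal(2)
        pos[i] = pos[i - 1] + step
    return pos

trajectory = simulate_abp()
```

The interplay between the persistence set by the rotational diffusion `D_r` and the propulsion speed `v` is what distinguishes this model from a passive Brownian walker.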

Links
@ IOP Publishing

@ Amazon.com

Citation 
Aykut Argun, Agnese Callegari & Giovanni Volpe. Simulation of Complex Systems. IOP Publishing, 2022.
ISBN: 9780750338417 (Hardback) 9780750338431 (Ebook).

Comparison of Two-Dimensional- and Three-Dimensional-Based U-Net Architectures for Brain Tissue Classification in One-Dimensional Brain CT published in Frontiers in Computational Neuroscience

CT is split into smaller patches. (Image by the Authors.)
Comparison of Two-Dimensional- and Three-Dimensional-Based U-Net Architectures for Brain Tissue Classification in One-Dimensional Brain CT
Meera Srikrishna, Rolf A. Heckemann, Joana B. Pereira, Giovanni Volpe, Anna Zettergren, Silke Kern, Eric Westman, Ingmar Skoog and Michael Schöll
Frontiers in Computational Neuroscience 15, 785244 (2022)
doi: 10.3389/fncom.2021.785244

Brain tissue segmentation plays a crucial role in feature extraction, volumetric quantification, and morphometric analysis of brain scans. For the assessment of brain structure and integrity, CT is a non-invasive, cheaper, faster, and more widely available modality than MRI. However, the clinical application of CT is mostly limited to the visual assessment of brain integrity and exclusion of copathologies. We have previously developed two-dimensional (2D) deep learning-based segmentation networks that successfully classified brain tissue in head CT. Recently, deep learning-based MRI segmentation models have successfully used patch-based three-dimensional (3D) segmentation networks. In this study, we aimed to develop patch-based 3D segmentation networks for CT brain tissue classification. Furthermore, we aimed to compare the performance of 2D- and 3D-based segmentation networks to perform brain tissue classification in anisotropic CT scans. For this purpose, we developed 2D and 3D U-Net-based deep learning models that were trained and validated on MR-derived segmentations from scans of 744 participants of the Gothenburg H70 Cohort with both CT and T1-weighted MRI scans acquired close in time to each other. Segmentation performance of both 2D and 3D models was evaluated on 234 unseen datasets using measures of distance, spatial similarity, and tissue volume. Single-task slice-wise processed 2D U-Nets performed better than multitask patch-based 3D U-Nets in CT brain tissue classification. These findings provide support for the use of 2D U-Nets to segment brain tissue in one-dimensional (1D) CT. This could increase the application of CT to detect brain abnormalities in clinical settings.
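The practical difference between the two network families lies in how the CT volume is fed to them: slice by slice for the 2D U-Nets versus smaller cubic patches for the 3D U-Nets. The sketch below illustrates the two input strategies with plain NumPy; the volume dimensions and patch size are illustrative, not the ones used in the paper:

```python
import numpy as np

# Toy CT volume: (slices, height, width)
volume = np.random.rand(32, 128, 128)

# 2D approach: each axial slice becomes one network input.
slices_2d = [volume[i] for i in range(volume.shape[0])]

# 3D approach: the volume is split into smaller cubic patches.
def extract_3d_patches(vol, size=32):
    """Tile the volume into non-overlapping cubes of side `size`."""
    patches = []
    for z in range(0, vol.shape[0], size):
        for y in range(0, vol.shape[1], size):
            for x in range(0, vol.shape[2], size):
                patches.append(vol[z:z + size, y:y + size, x:x + size])
    return patches

patches_3d = extract_3d_patches(volume)
```

For strongly anisotropic scans, where in-plane resolution far exceeds the slice spacing, cubic patches mix resolutions along their axes, which is one intuition for why the slice-wise 2D approach can perform better here.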

CORDIS News article on ComplexSwimmers

An illustration of anomalous diffusion. (Image by Gorka Muñoz-Gil.)

CORDIS, the Community Research and Development Information Service of the European Commission, recently covered Giovanni Volpe's ComplexSwimmers ERC-StG grant in a news article:
Throwing down the scientific gauntlet to assess methods for anomalous diffusion.
The article highlights the joint results obtained by three EU-backed research projects (NOQIA, OPTOlogic and ComplexSwimmers) dealing with anomalous diffusion.

Raman Tweezers for Tire and Road Wear Micro- and Nanoparticles Analysis published in Environmental Science: Nano

Optical beam focused into the liquid: the tire particles are pushed away from the laser focus.

Raman Tweezers for Tire and Road Wear Micro- and Nanoparticles Analysis
Pietro Giuseppe Gucciardi, Raymond Gillibert, Alessandro Magazzù, Agnese Callegari, David Bronte Ciriza, Antonino Foti, Maria Grazia Donato, Onofrio M. Maragò, Giovanni Volpe, Marc Lamy de La Chapelle & Fabienne Lagarde
Environmental Science: Nano 9, 145–161 (2022)
ChemRxiv: https://doi.org/10.33774/chemrxiv-2021-h59n1
doi: https://doi.org/10.1039/D1EN00553G

Tire and Road Wear Particles (TRWP) are non-exhaust particulate matter generated by road transport means during the mechanical abrasion of tires, brakes and roads. TRWP accumulate on the roadsides and are transported into the aquatic ecosystem during stormwater runoffs. Due to their size (sub-millimetric) and rubber content (elastomers), TRWP are considered microplastics (MPs). While the amount of the MPs polluting the water ecosystem with sizes from ~ 5 μm to more than 100 μm is known, the fraction of smaller particles is unknown due to the technological gap in the detection and analysis of < 5 μm MPs. Here we show that Raman Tweezers, a combination of optical tweezers and Raman spectroscopy, can be used to trap and chemically analyze individual TRWPs in a liquid environment, down to the sub-micrometric scale. Using tire particles mechanically ground from aged car tires in water solutions, we show that it is possible to optically trap individual sub-micron particles, in a so-called 2D trapping configuration, and acquire their Raman spectrum in a few tens of seconds. The analysis is then extended to samples collected from a brake test platform, where we highlight the presence of sub-micrometric agglomerates of rubber and brake debris, thanks to the presence of additional spectral features other than carbon. Our results show the potential of Raman Tweezers in environmental pollution analysis and highlight the formation of nanosized TRWP during wear.

Featured in:
University of Gothenburg > News and Events: New technology enables the detection of microplastics from road wear
Phys.org > News > Nanotechnology: New technology enables the detection of microplastics from road wear
Nonsologreen > Green: Le Raman-tweezers per la guerra alle nanoplastiche che inquinano fiumi e mari

Invited Presentation by G. Volpe at FiO LS, 4 November 2021

DeepTrack 2.0 Logo. (Image from DeepTrack 2.1 Project)
DeepTrack 2.1: A Framework for Deep Learning for Microscopy
Giovanni Volpe
Invited Presentation at Frontiers in Optics + Laser Science
Online
4 November 2021
4:00 PM

We present DeepTrack 2.0, a software package to design, train, and validate deep-learning solutions for digital microscopy. We demonstrate it for applications from particle localization, tracking, and characterization, to cell counting and classification, to virtual staining.

Link: FTh6A.3

Press release on Objective comparison of methods to decode anomalous diffusion

The article Objective comparison of methods to decode anomalous diffusion has been featured in the News of the University of Gothenburg.

The study, published in Nature Communications and co-written by researchers at the Soft Matter Lab of the Department of Physics at the University of Gothenburg, originates from the AnDi Challenge, a competition co-organised by Giovanni Volpe with researchers from University of Vic – Central University of Catalunya, Institute of Photonic Sciences in Barcelona, University of Potsdam, and Valencia Polytechnic University.

The challenge was held during March–November 2020 and consisted of three main tasks concerning anomalous exponent inference, model classification, and trajectory segmentation. The goal was to provide an objective assessment of the performance of methods to characterise anomalous diffusion from single trajectories.

Here are the links to the press releases:
English: A scientific competition led to improved methods for analysing the diffusion of particles.
Swedish: En vetenskaplig tävling ledde till förbättrade metoder för att analysera diffusion av partiklar.

Objective comparison of methods to decode anomalous diffusion published in Nature Communications

An illustration of anomalous diffusion. (Image by Gorka Muñoz-Gil.)
Objective comparison of methods to decode anomalous diffusion
Gorka Muñoz-Gil, Giovanni Volpe, Miguel Angel Garcia-March, Erez Aghion, Aykut Argun, Chang Beom Hong, Tom Bland, Stefano Bo, J. Alberto Conejero, Nicolás Firbas, Òscar Garibo i Orts, Alessia Gentili, Zihan Huang, Jae-Hyung Jeon, Hélène Kabbech, Yeongjin Kim, Patrycja Kowalek, Diego Krapf, Hanna Loch-Olszewska, Michael A. Lomholt, Jean-Baptiste Masson, Philipp G. Meyer, Seongyu Park, Borja Requena, Ihor Smal, Taegeun Song, Janusz Szwabiński, Samudrajit Thapa, Hippolyte Verdier, Giorgio Volpe, Arthur Widera, Maciej Lewenstein, Ralf Metzler, and Carlo Manzo
Nat. Commun. 12, Article number: 6253 (2021)
doi: 10.1038/s41467-021-26320-w
arXiv: 2105.06766

Deviations from Brownian motion leading to anomalous diffusion are found in transport dynamics from quantum physics to life sciences. The characterization of anomalous diffusion from the measurement of an individual trajectory is a challenging task, which traditionally relies on calculating the trajectory mean squared displacement. However, this approach breaks down for cases of practical interest, e.g., short or noisy trajectories, heterogeneous behaviour, or non-ergodic processes. Recently, several new approaches have been proposed, mostly building on the ongoing machine-learning revolution. To perform an objective comparison of methods, we gathered the community and organized an open competition, the Anomalous Diffusion challenge (AnDi). Participating teams applied their algorithms to a commonly-defined dataset including diverse conditions. Although no single method performed best across all scenarios, machine-learning-based approaches achieved superior performance for all tasks. The discussion of the challenge results provides practical advice for users and a benchmark for developers.
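The traditional mean-squared-displacement baseline mentioned in the abstract, against which the challenge entries improved, can be sketched in a few lines. The example below estimates the anomalous exponent α by a log-log fit of the time-averaged MSD, tested on ordinary Brownian motion (for which α ≈ 1); it is an illustration of the classical approach, not one of the challenge methods:

```python
import numpy as np

rng = np.random.default_rng(42)
trajectory = np.cumsum(rng.standard_normal(10_000))  # 1D Brownian motion

def anomalous_exponent(x, max_lag=100):
    """Fit the time-averaged MSD(lag) ~ lag**alpha on a log-log scale."""
    lags = np.arange(1, max_lag)
    msd = np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return alpha

alpha = anomalous_exponent(trajectory)
```

As the abstract notes, this fit breaks down for short or noisy trajectories and non-ergodic processes, which is precisely the regime where the machine-learning entries to the challenge performed best.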

Invited Talk by G. Volpe at Microscopies and Spectroscopies: Accessing the Nanoscale, 28 October 2021

DeepTrack 2.0 Logo. (Image from DeepTrack 2.0 Project)
Quantitative Digital Microscopy with Deep Learning
Giovanni Volpe
Invited Talk at the XXXVI Trobades Cientifíques de la Mediterránia – Josep Miquel Vidal
Microscopies and Spectroscopies: Accessing the Nanoscale
Menorca, Spain
28 October 2021
11:40 AM

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automatized, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we introduce a software package, DeepTrack 2.0, to design, train and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.0 can be easily customized for user-specific applications, and, thanks to its open-source object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.
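A key ingredient behind such deep-learning pipelines is training on simulated microscopy images with known ground truth. The sketch below generates a toy synthetic image of a diffraction-limited particle with plain NumPy; it is a hedged illustration of the general idea, not the actual DeepTrack 2.0 API, and all parameters are arbitrary:

```python
import numpy as np

def synthetic_particle_image(size=64, sigma=2.0, noise=0.05, seed=None):
    """One image with a Gaussian spot at a random position, plus noise.

    Returns the image and the ground-truth (row, col) position,
    which would serve as the training label for a localization network.
    """
    rng = np.random.default_rng(seed)
    cy, cx = rng.uniform(10, size - 10, 2)  # ground-truth position
    y, x = np.mgrid[:size, :size]
    image = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    image += noise * rng.standard_normal((size, size))
    return image, (cy, cx)

image, position = synthetic_particle_image(seed=1)
```

Because the particle position is known exactly, arbitrarily many labeled image–position pairs can be generated on the fly, sidestepping manual annotation.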

Press release on Active Droploids

The article Active Droploids has been featured in a press release of the University of Gothenburg.

The study, published in Nature Communications, examines a special system of colloidal particles and demonstrates a new kind of active matter, which interacts with and modifies its environment. In the long run, the result of the study can be used for drug delivery inside the human body or to perform sensing of environmental pollutants and their clean-up.

Here are the links to the press releases:
English: Feedback creates a new class of active biomimetic materials.
Swedish: Feedback möjliggör en ny form av aktiva biomimetiska material.

The article has also been featured in Mirage News, Science Daily, Phys.org, Innovations Report, Informationsdienst Wissenschaft (idw) online, and Nanowerk.

Keynote Talk by G. Volpe at CIIBBI, 15 October 2021

DeepTrack 2.0 Logo. (Image from DeepTrack 2.0 Project)
Deep Learning for Microscopy with Biomedical Applications
Giovanni Volpe
Keynote Talk at the 2nd International Congress of Biomedical Engineering and Bioengineering
Online
15 October 2021
14:00 CEST

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automatized, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we introduce a software package, DeepTrack 2.0, to design, train and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.0 can be easily customized for user-specific applications, and, thanks to its open-source object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.