Roadmap for animate matter published in Journal of Physics: Condensed Matter

The three properties of animacy. The three polar plots sketch our jointly perceived level of development for each principle of animacy (i.e. activity, adaptiveness and autonomy) for each system discussed in this roadmap. The angular coordinate represents the various systems, while the radial coordinate represents the level of development (from low to high) that each system shows for the principle of that plot. Ideally, within a generation, all systems will fill these polar plots to show high levels in each of the three attributes of animacy. For now, only biological materials (not represented here) can be considered fully animate. (Image adapted from the manuscript.)
Roadmap for animate matter
Giorgio Volpe, Nuno A M Araújo, Maria Guix, Mark Miodownik, Nicolas Martin, Laura Alvarez, Juliane Simmchen, Roberto Di Leonardo, Nicola Pellicciotta, Quentin Martinet, Jérémie Palacci, Wai Kit Ng, Dhruv Saxena, Riccardo Sapienza, Sara Nadine, João F Mano, Reza Mahdavi, Caroline Beck Adiels, Joe Forth, Christian Santangelo, Stefano Palagi, Ji Min Seok, Victoria A Webster-Wood, Shuhong Wang, Lining Yao, Amirreza Aghakhani, Thomas Barois, Hamid Kellay, Corentin Coulais, Martin van Hecke, Christopher J Pierce, Tianyu Wang, Baxi Chong, Daniel I Goldman, Andreagiovanni Reina, Vito Trianni, Giovanni Volpe, Richard Beckett, Sean P Nair, Rachel Armstrong
Journal of Physics: Condensed Matter 37, 333501 (2025)
arXiv: 2407.10623
doi: 10.1088/1361-648X/adebd3

Humanity has long sought inspiration from nature to innovate materials and devices. As science advances, nature-inspired materials are becoming part of our lives. Animate materials, characterized by their activity, adaptability, and autonomy, emulate properties of living systems. While only biological materials fully embody these principles, artificial versions are advancing rapidly, promising transformative impacts in the circular economy, health and climate resilience within a generation. This roadmap presents authoritative perspectives on animate materials across different disciplines and scales, highlighting their interdisciplinary nature and potential applications in diverse fields including nanotechnology, robotics and the built environment. It underscores the need for concerted efforts to address shared challenges such as complexity management, scalability, evolvability, interdisciplinary collaboration, and ethical and environmental considerations. The framework defined by classifying materials based on their level of animacy can guide this emerging field to encourage cooperation and responsible development. By unravelling the mysteries of living matter and leveraging its principles, we can design materials and systems that will transform our world in a more sustainable manner.

Latent Space-Driven Quantification of Biofilm Formation using Time Resolved Droplet Microfluidics on arXiv

Automated segmentation of bacterial structures within a droplet. The image shows a bright-field microscopy view where a large biofilm region (green, outlined in blue) has been segmented from surrounding features. Small aggregates (yellow contours) are also highlighted. This segmentation enables structural differentiation of biofilm components for downstream quantitative analysis. (Image by D. Pérez Guerrero.)
Latent Space-Driven Quantification of Biofilm Formation using Time Resolved Droplet Microfluidics
Daniela Pérez Guerrero, Jesús Manuel Antúnez Domínguez, Aurélie Vigne, Daniel Midtvedt, Wylie Ahmed, Lisa D. Muiznieks, Giovanni Volpe, Caroline Beck Adiels
arXiv: 2507.07632

Bacterial biofilms play a significant role in various fields that impact our daily lives, from detrimental public health hazards to beneficial applications in bioremediation, biodegradation, and wastewater treatment. However, high-resolution tools for studying their dynamic responses to environmental changes and collective cellular behavior remain scarce. To characterize and quantify biofilm development, we present a droplet-based microfluidic platform combined with an image analysis tool for in-situ studies. In this setup, Bacillus subtilis was inoculated in liquid Lysogeny Broth microdroplets, and biofilm formation was examined within emulsions at the water-oil interface. Bacteria were encapsulated in droplets, which were then trapped in compartments, allowing continuous optical access throughout biofilm formation. Droplets, each forming a distinct microenvironment, were generated at high throughput using flow-controlled pressure pumps, ensuring monodispersity. A microfluidic multi-injection valve enabled rapid switching of encapsulation conditions without disrupting droplet generation, allowing side-by-side comparison. Our platform supports fluorescence microscopy imaging and quantitative analysis of droplet content, along with time-lapse bright-field microscopy for dynamic observations. To process high-throughput, complex data, we integrated an automated, unsupervised image analysis tool based on a Variational Autoencoder (VAE). This AI-driven approach efficiently captured biofilm structures in a latent space, enabling detailed pattern recognition and analysis. Our results demonstrate the accurate detection and quantification of biofilms using thresholding and masking applied to latent space representations, enabling the precise measurement of biofilm and aggregate areas.
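
As a rough illustration of the latent-space idea described above (not the authors' implementation), the hypothetical PyTorch sketch below encodes bright-field patches with a small convolutional VAE and thresholds the latent code to flag biofilm-like patches, whose count yields an area estimate. The architecture, latent dimension, threshold and patch-to-area conversion are all assumptions made purely for illustration.

import torch
import torch.nn as nn

class PatchVAE(nn.Module):
    """Toy convolutional VAE for 32x32 bright-field patches (illustrative sizes)."""
    def __init__(self, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(32 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(32 * 8 * 8, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8), nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation trick
        return self.decoder(z), mu, logvar

def estimate_biofilm_area(patches, model, threshold=1.0, patch_area_um2=25.0):
    """Flag patches whose latent-code norm exceeds a (hypothetical) threshold and
    convert the patch count into an area estimate."""
    with torch.no_grad():
        _, mu, _ = model(patches)              # patches: (N, 1, 32, 32)
    biofilm_like = mu.norm(dim=1) > threshold  # one boolean per patch
    return biofilm_like.sum().item() * patch_area_um2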

An in vivo mimetic liver-lobule-chip (LLoC) for stem cell maturation, and zonation of hepatocyte-like cells on chip published in Lab on a Chip

The image shows a liver-lobule-chip (LLoC) with 21 artificial lobules mimicking liver microarchitecture. Its PDMS design supports diffusion-based perfusion, shear stress, and nutrient gradients and enables iPSC-derived hepatic maturation and spatially organized, zonated function in 3D. (Image by C. Beck Adiels)
An in vivo mimetic liver-lobule-chip (LLoC) for stem cell maturation, and zonation of hepatocyte-like cells on chip
Philip Dalsbecker, Siiri Suominen, Muhammad Asim Faridi, Reza Mahdavi, Julia Johansson, Charlotte Hamngren Blomqvist, Mattias Goksör, Katriina Aalto-Setälä, Leena E. Viiri and Caroline B. Adiels
Lab on a Chip 25, 4328–4344 (2025)
doi: 10.1039/D4LC00509K

In vitro cell culture models play a crucial role in preclinical drug discovery. To achieve optimal culturing environments and establish physiologically relevant organ-specific conditions, it is imperative to replicate in vivo scenarios when working with primary or induced pluripotent cell types. However, current approaches to recreating in vivo conditions and generating relevant 3D cell cultures still fall short. In this study, we validate a liver-lobule-chip (LLoC) containing 21 artificial liver lobules, each representing the smallest functional unit of the human liver. The LLoC facilitates diffusion-based perfusion via sinusoid-mimetic structures, providing physiologically relevant shear stress exposure and radial nutrient concentration gradients within each lobule. We demonstrate the feasibility of long-term cultures (up to 14 days) of viable and functional HepG2 cells in a 3D discoid tissue structure, serving as initial proof of concept. Thereafter, we successfully differentiate sensitive, human induced pluripotent stem cell (iPSC)-derived cells into hepatocyte-like cells over a period of 20 days on-chip, exhibiting advancements in maturity compared to traditional 2D cultures. Further, hepatocyte-like cells cultured in the LLoC exhibit zonated protein expression profiles, indicating the presence of metabolic gradients characteristic of liver lobules. Our results highlight the suitability of the LLoC for long-term discoid tissue cultures, specifically for iPSCs, and their differentiation in a perfused environment. We envision the LLoC as a starting point for more advanced in vitro models, allowing for the combination of multiple liver cell types to create a comprehensive liver model for disease-on-chip studies. Ultimately, when combined with stem cell technology, the LLoC offers a promising and robust on-chip liver model that serves as a viable alternative to primary hepatocyte cultures, ideally suited for preclinical drug screening and personalized medicine applications.

Cross-modality transformations in biological microscopy enabled by deep learning published in Advanced Photonics

Cross-modality transformation and segmentation. (Image by the Authors of the manuscript.)
Cross-modality transformations in biological microscopy enabled by deep learning
Dana Hassan, Jesús Domínguez, Benjamin Midtvedt, Henrik Klein Moberg, Jesús Pineda, Christoph Langhammer, Giovanni Volpe, Antoni Homs Corbera, Caroline B. Adiels
Advanced Photonics 6, 064001 (2024)
doi: 10.1117/1.AP.6.6.064001

Recent advancements in deep learning (DL) have propelled the virtual transformation of microscopy images across optical modalities, enabling unprecedented multimodal imaging analysis hitherto impossible. Despite these strides, the integration of such algorithms into scientists’ daily routines and clinical trials remains limited, largely due to a lack of recognition within their respective fields and the plethora of available transformation methods. To address this, we present a structured overview of cross-modality transformations, encompassing applications, data sets, and implementations, aimed at unifying this evolving field. Our review focuses on DL solutions for two key applications: contrast enhancement of targeted features within images and resolution enhancements. We recognize cross-modality transformations as a valuable resource for biologists seeking a deeper understanding of the field, as well as for technology developers aiming to better grasp sample limitations and potential applications. Notably, they enable high-contrast, high-specificity imaging akin to fluorescence microscopy without the need for laborious, costly, and disruptive physical-staining procedures. In addition, they facilitate the realization of imaging with properties that would typically require costly or complex physical modifications, such as achieving superresolution capabilities. By consolidating the current state of research in this review, we aim to catalyze further investigation and development, ultimately bringing the potential of cross-modality transformations into the hands of researchers and clinicians alike.
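
To make the notion of a cross-modality transformation concrete, here is a minimal, hypothetical PyTorch sketch of an image-to-image network trained on paired patches to map a bright-field input to a fluorescence-like output. The tiny encoder-decoder, the pixel-wise L1 loss and the random stand-in data are illustrative assumptions, not any of the specific methods reviewed in the paper.

import torch
import torch.nn as nn

class BrightfieldToFluorescence(nn.Module):
    """Toy encoder-decoder mapping one imaging modality to another (illustrative sizes)."""
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
        )
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(), # 32x32 -> 64x64
        )

    def forward(self, x):
        return self.up(self.down(x))

model = BrightfieldToFluorescence()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # pixel-wise loss between predicted and measured fluorescence

# One illustrative training step on random stand-in data (real use needs paired images)
brightfield = torch.rand(4, 1, 64, 64)   # input modality
fluorescence = torch.rand(4, 1, 64, 64)  # paired target modality
optimizer.zero_grad()
loss = loss_fn(model(brightfield), fluorescence)
loss.backward()
optimizer.step()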

Tutorial for the growth and development of Myxococcus xanthus as a Model System at the Intersection of Biology and Physics on arXiv

Myxococcus xanthus colonies develop different strategies to adapt to their environment, leading to the formation of macroscopic patterns from microscopic entities. (Image by the Authors of the manuscript.)
Tutorial for the growth and development of Myxococcus xanthus as a Model System at the Intersection of Biology and Physics
Jesus Manuel Antúnez Domínguez, Laura Pérez García, Natsuko Rivera-Yoshida, Jasmin Di Franco, David Steiner, Alejandro V. Arzola, Mariana Benítez, Charlotte Hamngren Blomqvist, Roberto Cerbino, Caroline Beck Adiels, Giovanni Volpe
arXiv: 2407.18714

Myxococcus xanthus is a unicellular organism whose cells possess the ability to move and communicate, leading to the emergence of complex collective properties and behaviours. This has made it an ideal model system to study the emergence of collective behaviours in interdisciplinary research efforts lying at the intersection of biology and physics, especially in the growing field of active matter research. Often, challenges arise when setting up reliable and reproducible culturing protocols. This tutorial provides a clear and comprehensive guide on the culture, growth, development, and experimental sample preparation of M. xanthus. Additionally, it includes some representative examples of experiments that can be conducted using these samples, namely motility assays, fruiting body formation, predation, and elasticotaxis.

Presentation by C. B. Adiels at AI for Scientific Data Analysis, Gothenburg, 31 May 2023

Phase-contrast image before virtual staining. (Image reproduced from https://doi.org/10.1101/2022.07.18.500422.)
Dynamic live/apoptotic cell assay using phase-contrast imaging and deep learning
Caroline B. Adiels

Chemical live/dead assays have a long history of providing information about the viability of cells cultured in vitro. The standard methods rely on imaging chemically stained cells using fluorescence microscopy and further analysis of the obtained images to retrieve the proportion of living cells in the sample. However, such a technique is not only time-consuming but also invasive. Due to the toxicity of chemical dyes, once a sample is stained, it is discarded, meaning that longitudinal studies are impossible using this approach. Further, information about when cells start programmed cell death (apoptosis) is more relevant for dynamic studies. Here, we present an alternative method where cell images from phase-contrast time-lapse microscopy are virtually stained using deep learning. In this study, human endothelial cells are virtually stained as live or apoptotic and subsequently counted using the self-supervised single-shot deep-learning technique (LodeSTAR). Our approach is less labour-intensive than traditional chemical staining procedures and provides dynamic live/apoptotic cell ratios from a continuous cell population with minimal impact. Further, it can be used to extract data from dense cell samples, where manual counting is unfeasible.

Date: 31 May 2023
Time: 10:30
Place: MC2 Kollektorn
Event: AI for Scientific Data Analysis: Miniconference

“Coffee Rings” presented at Gothenburg Science Festival 2023

Coffee ring exhibition at the Gothenburg Science Festival. (Photo by C. Beck Adiels.)
Our recent work on “coffee rings” was presented at the Gothenburg Science Festival, which, with about 100 000 visitors each year, is one of the largest popular science events in Europe.

On Wednesday 19th April 2023, Marcel Rey, Laura Natali, Daniela Pérez Guerrero and Caroline Adiels set up a stand in Nordstan.

In this guided exhibition, visitors were able to observe the flow inside a drying droplet using optical microscopes. They learned how the suspended solid coffee particles flow from the inside towards the edge of the coffee droplet, where they accumulate and cause the characteristic coffee ring pattern after drying.

The coffee ring effect still presents a major challenge in ink-jet printing and coating technologies, where uniform drying is required. We therefore shared our recently developed strategies to overcome the coffee ring effect and obtain a uniform deposit from drying droplets.

Finally, visitors were offered a freshly brewed espresso, not only to drink but also to experience the “coffee ring effect” hands-on.

Roadmap for Optical Tweezers published in Journal of Physics: Photonics

Illustration of an optical tweezers holding a particle. (Image by A. Magazzù.)
Roadmap for optical tweezers
Giovanni Volpe, Onofrio M Maragò, Halina Rubinsztein-Dunlop, Giuseppe Pesce, Alexander B Stilgoe, Giorgio Volpe, Georgiy Tkachenko, Viet Giang Truong, Síle Nic Chormaic, Fatemeh Kalantarifard, Parviz Elahi, Mikael Käll, Agnese Callegari, Manuel I Marqués, Antonio A R Neves, Wendel L Moreira, Adriana Fontes, Carlos L Cesar, Rosalba Saija, Abir Saidi, Paul Beck, Jörg S Eismann, Peter Banzer, Thales F D Fernandes, Francesco Pedaci, Warwick P Bowen, Rahul Vaippully, Muruga Lokesh, Basudev Roy, Gregor Thalhammer-Thurner, Monika Ritsch-Marte, Laura Pérez García, Alejandro V Arzola, Isaac Pérez Castillo, Aykut Argun, Till M Muenker, Bart E Vos, Timo Betz, Ilaria Cristiani, Paolo Minzioni, Peter J Reece, Fan Wang, David McGloin, Justus C Ndukaife, Romain Quidant, Reece P Roberts, Cyril Laplane, Thomas Volz, Reuven Gordon, Dag Hanstorp, Javier Tello Marmolejo, Graham D Bruce, Kishan Dholakia, Tongcang Li, Oto Brzobohatý, Stephen H Simpson, Pavel Zemánek, Felix Ritort, Yael Roichman, Valeriia Bobkova, Raphael Wittkowski, Cornelia Denz, G V Pavan Kumar, Antonino Foti, Maria Grazia Donato, Pietro G Gucciardi, Lucia Gardini, Giulio Bianchi, Anatolii V Kashchuk, Marco Capitanio, Lynn Paterson, Philip H Jones, Kirstine Berg-Sørensen, Younes F Barooji, Lene B Oddershede, Pegah Pouladian, Daryl Preece, Caroline Beck Adiels, Anna Chiara De Luca, Alessandro Magazzù, David Bronte Ciriza, Maria Antonia Iatì, Grover A Swartzlander Jr
Journal of Physics: Photonics 5(2), 022501 (2023)
arXiv: 2206.13789
doi: 10.1088/2515-7647/acb57b

Optical tweezers are tools made of light that enable contactless pushing, trapping, and manipulation of objects, ranging from atoms to space light sails. Since the pioneering work by Arthur Ashkin in the 1970s, optical tweezers have evolved into sophisticated instruments and have been employed in a broad range of applications in the life sciences, physics, and engineering. These include accurate force and torque measurement at the femtonewton level, microrheology of complex fluids, single micro- and nano-particle spectroscopy, single-cell analysis, and statistical-physics experiments. This roadmap provides insights into current investigations involving optical forces and optical tweezers from their theoretical foundations to designs and setups. It also offers perspectives for applications to a wide range of research fields, from biophysics to space exploration.
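
As a flavour of the quantitative measurements such instruments enable, the short sketch below estimates a trap's stiffness from the position fluctuations of a trapped bead via the equipartition theorem, k = kB*T / <x^2>, and converts a nanometre-scale displacement into a femtonewton-scale restoring force. The position trace is synthetic and the parameter values are assumptions chosen only to illustrate the orders of magnitude.

import numpy as np

kB = 1.380649e-23   # Boltzmann constant (J/K)
T = 300.0           # temperature (K)
k_true = 1e-6       # assumed trap stiffness (N/m), used only to synthesise data

# Stand-in position trace: Gaussian fluctuations with variance kB*T/k_true (metres)
x = np.random.normal(0.0, np.sqrt(kB * T / k_true), size=100_000)

k_est = kB * T / np.var(x)   # equipartition estimate of the trap stiffness (N/m)
force = k_est * 50e-9        # restoring force for a 50 nm displacement (N)
print(f"stiffness ~ {k_est:.2e} N/m, force at 50 nm ~ {force * 1e15:.1f} fN")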

Roadmap on Deep Learning for Microscopy on arXiv

Spatio-temporal spectrum diagram of microscopy techniques and their applications. (Image by the Authors of the manuscript.)
Roadmap on Deep Learning for Microscopy
Giovanni Volpe, Carolina Wählby, Lei Tian, Michael Hecht, Artur Yakimovich, Kristina Monakhova, Laura Waller, Ivo F. Sbalzarini, Christopher A. Metzler, Mingyang Xie, Kevin Zhang, Isaac C.D. Lenton, Halina Rubinsztein-Dunlop, Daniel Brunner, Bijie Bai, Aydogan Ozcan, Daniel Midtvedt, Hao Wang, Nataša Sladoje, Joakim Lindblad, Jason T. Smith, Marien Ochoa, Margarida Barroso, Xavier Intes, Tong Qiu, Li-Yu Yu, Sixian You, Yongtao Liu, Maxim A. Ziatdinov, Sergei V. Kalinin, Arlo Sheridan, Uri Manor, Elias Nehme, Ofri Goldenberg, Yoav Shechtman, Henrik K. Moberg, Christoph Langhammer, Barbora Špačková, Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Tobias Thalheim, Frank Cichos, Stefano Bo, Lars Hubatsch, Jesus Pineda, Carlo Manzo, Harshith Bachimanchi, Erik Selander, Antoni Homs-Corbera, Martin Fränzl, Kevin de Haan, Yair Rivenson, Zofia Korczak, Caroline Beck Adiels, Mite Mijalkov, Dániel Veréb, Yu-Wei Chang, Joana B. Pereira, Damian Matuszewski, Gustaf Kylberg, Ida-Maria Sintorn, Juan C. Caicedo, Beth A Cimini, Muyinatu A. Lediju Bell, Bruno M. Saraiva, Guillaume Jacquemet, Ricardo Henriques, Wei Ouyang, Trang Le, Estibaliz Gómez-de-Mariscal, Daniel Sage, Arrate Muñoz-Barrutia, Ebba Josefson Lindqvist, Johanna Bergman
arXiv: 2303.03793

Through digital imaging, microscopy has evolved from primarily being a means for visual observation of life at the micro- and nano-scale, to a quantitative tool with ever-increasing resolution and throughput. Artificial intelligence, deep neural networks, and machine learning are all niche terms describing computational methods that have gained a pivotal role in microscopy-based research over the past decade. This Roadmap is written collectively by prominent researchers and encompasses selected aspects of how machine learning is applied to microscopy image data, with the aim of gaining scientific knowledge by improved image quality, automated detection, segmentation, classification and tracking of objects, and efficient merging of information from multiple imaging modalities. We aim to give the reader an overview of the key developments and an understanding of possibilities and limitations of machine learning for microscopy. It will be of interest to a wide cross-disciplinary audience in the physical sciences and life sciences.

Dynamic live/apoptotic cell assay using phase-contrast imaging and deep learning on bioRxiv

Phase-contrast image before virtual staining. (Image by the Authors.)
Dynamic live/apoptotic cell assay using phase-contrast imaging and deep learning
Zofia Korczak, Jesús Pineda, Saga Helgadottir, Benjamin Midtvedt, Mattias Goksör, Giovanni Volpe, Caroline B. Adiels
bioRxiv: 10.1101/2022.07.18.500422

Chemical live/dead assays have a long history of providing information about the viability of cells cultured in vitro. The standard methods rely on imaging chemically stained cells using fluorescence microscopy and further analysis of the obtained images to retrieve the proportion of living cells in the sample. However, such a technique is not only time-consuming but also invasive. Due to the toxicity of chemical dyes, once a sample is stained, it is discarded, meaning that longitudinal studies are impossible using this approach. Further, information about when cells start programmed cell death (apoptosis) is more relevant for dynamic studies. Here, we present an alternative method where cell images from phase-contrast time-lapse microscopy are virtually stained using deep learning. In this study, human endothelial cells are virtually stained as live or apoptotic and subsequently counted using the self-supervised single-shot deep-learning technique (LodeSTAR). Our approach is less labour-intensive than traditional chemical staining procedures and provides dynamic live/apoptotic cell ratios from a continuous cell population with minimal impact. Further, it can be used to extract data from dense cell samples, where manual counting is unfeasible.
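
For illustration only, the sketch below shows a generic counting step that could follow the virtual staining: predicted live and apoptotic probability maps are thresholded and their connected components counted to obtain a live/apoptotic ratio. The connected-component labelling is a simple stand-in for the LodeSTAR detector used in the work, and the maps and parameters are placeholders.

import numpy as np
from scipy import ndimage

def count_cells(prob_map, threshold=0.5, min_pixels=20):
    """Count connected blobs above a probability threshold, ignoring tiny specks."""
    labels, n_blobs = ndimage.label(prob_map > threshold)
    if n_blobs == 0:
        return 0
    sizes = np.bincount(labels.ravel())[1:]   # pixels per labelled blob
    return int(np.sum(sizes >= min_pixels))

# Stand-in predictions from a virtual-staining network (values in [0, 1])
live_map = np.random.rand(512, 512)
apoptotic_map = np.random.rand(512, 512)

n_live = count_cells(live_map)
n_apoptotic = count_cells(apoptotic_map)
live_fraction = n_live / max(n_live + n_apoptotic, 1)
print(f"live: {n_live}, apoptotic: {n_apoptotic}, live fraction: {live_fraction:.2f}")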