At the event, held on Tuesday, 15 March 2022, 16:00–19:00, the ten teams that had gone through the training at the Startup Camp and developed their company ideas pitched their companies on stage to a panel of entrepreneurship experts, the other nine teams, and all the business coaches at Chalmers Ventures. DeepTrack took first place among the ten participants. Congratulations!
Here are a few pictures from the final pitching event of the Startup Camp.
Single-shot self-supervised particle tracking
Benjamin Midtvedt, Jesús Pineda, Fredrik Skärberg, Erik Olsén, Harshith Bachimanchi, Emelie Wesén, Elin K. Esbjörner, Erik Selander, Fredrik Höök, Daniel Midtvedt, Giovanni Volpe
Particle tracking is a fundamental task in digital microscopy. Recently, machine-learning approaches have made great strides in overcoming the limitations of more classical approaches. The training of state-of-the-art machine-learning methods almost universally relies on either vast amounts of labeled experimental data or the ability to numerically simulate realistic datasets. However, the data produced by experiments are often challenging to label and cannot be easily reproduced numerically. Here, we propose a novel deep-learning method, named LodeSTAR (Low-shot deep Symmetric Tracking And Regression), that learns to track objects with sub-pixel accuracy from a single unlabeled experimental image. This is made possible by exploiting the inherent roto-translational symmetries of the data. We demonstrate that LodeSTAR outperforms traditional methods in terms of accuracy. Furthermore, we analyze challenging experimental data containing densely packed cells or noisy backgrounds. We also exploit additional symmetries to extend the measurable particle properties to the particle’s vertical position by propagating the signal in Fourier space and its polarizability by scaling the signal strength. Thanks to the ability to train deep-learning models with a single unlabeled image, LodeSTAR can accelerate the development of high-quality microscopic analysis pipelines for engineering, biology, and medicine.
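The translational part of the symmetry constraint exploited above can be illustrated with a toy example: shifting the input image by a known amount must shift the predicted particle position by exactly that amount, and LodeSTAR turns this self-consistency requirement into a label-free training signal. The sketch below uses a simple intensity-weighted centroid as a stand-in "predictor" (the `gaussian_blob` and `centroid` helpers are hypothetical, for illustration only, not the LodeSTAR network):

```python
import numpy as np

def gaussian_blob(shape, center, sigma=2.0):
    """Synthetic particle image: a Gaussian spot at `center` (row, col)."""
    rows, cols = np.indices(shape)
    return np.exp(-((rows - center[0])**2 + (cols - center[1])**2) / (2 * sigma**2))

def centroid(image):
    """Sub-pixel position estimate via intensity-weighted center of mass."""
    rows, cols = np.indices(image.shape)
    total = image.sum()
    return np.array([(rows * image).sum() / total,
                     (cols * image).sum() / total])

# Translational symmetry: translating the input by (dy, dx) must translate
# the predicted position by the same (dy, dx). LodeSTAR enforces this kind
# of consistency during training, so no ground-truth labels are needed.
img = gaussian_blob((32, 32), center=(15.0, 15.0))
shifted = np.roll(img, shift=(3, -2), axis=(0, 1))

p0 = centroid(img)
p1 = centroid(shifted)
print(np.round(p1 - p0, 3))  # close to [3, -2]
```

A trained network replaces the centroid here; the point is that the symmetry relation itself supplies the supervision.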
Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion
Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe, Carlo Manzo
The characterization of dynamical processes in living systems provides important clues for their mechanistic interpretation and link to biological functions. Thanks to recent advances in microscopy techniques, it is now possible to routinely record the motion of cells, organelles, and individual molecules at multiple spatiotemporal scales in physiological conditions. However, the automated analysis of dynamics occurring in crowded and complex environments still lags behind the acquisition of microscopic image sequences. Here, we present a framework based on geometric deep learning that achieves the accurate estimation of dynamical properties in various biologically-relevant scenarios. This deep-learning approach relies on a graph neural network enhanced by attention-based components. By processing object features with geometric priors, the network is capable of performing multiple tasks, from linking coordinates into trajectories to inferring local and global dynamic properties. We demonstrate the flexibility and reliability of this approach by applying it to real and simulated data corresponding to a broad range of biological experiments.
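A core ingredient of the graph-based approach above is representing detections as nodes and candidate frame-to-frame links as edges, which the attention-enhanced network then classifies or regresses over. The sketch below shows only this graph-construction step, with a hypothetical helper (`build_linking_graph` and its distance cutoff are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def build_linking_graph(detections, max_dist=5.0):
    """Build a candidate-link graph from per-frame detections.

    detections: list of (N_t, 2) arrays of (x, y) positions, one per frame.
    Returns (nodes, edges): nodes as (frame, x, y) rows; edges as pairs of
    node indices joining detections in consecutive frames that lie within
    `max_dist` of each other.
    """
    nodes, offsets = [], []
    for t, pts in enumerate(detections):
        offsets.append(len(nodes))
        nodes.extend((t, x, y) for x, y in pts)
    edges = []
    for t in range(len(detections) - 1):
        a, b = detections[t], detections[t + 1]
        # Pairwise distances between all detections in frames t and t+1.
        dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        for i, j in zip(*np.nonzero(dists <= max_dist)):
            edges.append((offsets[t] + i, offsets[t + 1] + j))
    return np.array(nodes), edges

# Two particles drifting slowly to the right over three frames.
frames = [np.array([[0.0, 0.0], [10.0, 0.0]]),
          np.array([[1.0, 0.2], [11.0, 0.1]]),
          np.array([[2.1, 0.3], [12.0, 0.2]])]
nodes, edges = build_linking_graph(frames, max_dist=2.0)
print(len(nodes), len(edges))  # 6 nodes, 4 candidate links
```

The network's task is then to score these candidate edges (and node features) to recover trajectories and dynamic properties.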
The article Active Droploids has been featured in a press release of the University of Gothenburg.
The study, published in Nature Communications, examines a special system of colloidal particles and demonstrates a new kind of active matter, which interacts with and modifies its environment. In the long run, the result of the study can be used for drug delivery inside the human body or to perform sensing of environmental pollutants and their clean-up.
Jens Grauer, Falko Schmidt, Jesús Pineda, Benjamin Midtvedt, Hartmut Löwen, Giovanni Volpe & Benno Liebchen
Nature Communications 12, 6005 (2021)
Active matter comprises self-driven units, such as bacteria and synthetic microswimmers, that can spontaneously form complex patterns and assemble into functional microdevices. These processes are possible thanks to the out-of-equilibrium nature of active-matter systems, fueled by a one-way free-energy flow from the environment into the system. Here, we take the next step in the evolution of active matter by realizing a two-way coupling between active particles and their environment, where active particles act back on the environment giving rise to the formation of superstructures. In experiments and simulations we observe that, under light illumination, colloidal particles and their near-critical environment create mutually-coupled co-evolving structures. These structures unify in the form of active superstructures featuring a droplet shape and a colloidal engine inducing self-propulsion. We call them active droploids—a portmanteau of droplet and colloids. Our results provide a pathway to create active superstructures through environmental feedback.
The study, recently published in Biophysics Reviews, shows how artificial intelligence can be used to develop faster, cheaper and more reliable information about cells, while also eliminating the disadvantages from using chemicals in the process.
Extracting quantitative biological information from bright-field cell images using deep learning
Saga Helgadottir, Benjamin Midtvedt, Jesús Pineda, Alan Sabirsh, Caroline B. Adiels, Stefano Romeo, Daniel Midtvedt, Giovanni Volpe
Biophysics Reviews 2, 031401 (2021)
Quantitative analysis of cell structures is essential for biomedical and pharmaceutical research. The standard imaging approach relies on fluorescence microscopy, where cell structures of interest are labeled by chemical staining techniques. However, these techniques are often invasive and sometimes even toxic to the cells, in addition to being time-consuming, labor-intensive, and expensive. Here, we introduce an alternative deep-learning-powered approach based on the analysis of bright-field images by a conditional generative adversarial neural network (cGAN). We show that this approach can extract information from the bright-field images to generate virtually-stained images, which can be used in subsequent downstream quantitative analyses of cell structures. Specifically, we train a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using bright-field images of human stem-cell-derived fat cells (adipocytes), which are of particular interest for nanomedicine and vaccine development. Subsequently, we use these virtually-stained images to extract quantitative measures about these cell structures. Generating virtually-stained fluorescence images is less invasive, less expensive, and more reproducible than standard chemical staining; furthermore, it frees up the fluorescence microscopy channels for other analytical probes, thus increasing the amount of information that can be extracted from each cell.
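The downstream quantitative analysis mentioned above can be as simple as segmenting the generated channel and measuring the resulting structures. A minimal sketch, assuming a virtually stained lipid-droplet channel is available as a 2D array in [0, 1] (the `measure_droplets` helper and its threshold values are hypothetical, not the paper's pipeline):

```python
import numpy as np
from scipy import ndimage

def measure_droplets(stained, threshold=0.5, min_area=4):
    """Quantify structures in a (virtually) stained channel.

    stained: 2D array in [0, 1], e.g. a generated lipid-droplet channel.
    Returns the count and mean area (in pixels) of segmented structures.
    """
    mask = stained > threshold
    labels, n = ndimage.label(mask)  # connected-component segmentation
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    areas = areas[areas >= min_area]  # discard spurious specks
    return len(areas), (areas.mean() if len(areas) else 0.0)

# Toy "virtually stained" image with two bright droplets.
img = np.zeros((20, 20))
img[3:7, 3:7] = 1.0      # 16-pixel droplet
img[12:15, 12:15] = 1.0  # 9-pixel droplet
count, mean_area = measure_droplets(img)
print(count, mean_area)  # 2 droplets, mean area 12.5
```

Because the stained image is generated rather than acquired, the same measurement can be repeated on every cell without consuming a fluorescence channel.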
Quantitative Digital Microscopy with Deep Learning
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt, Giovanni Volpe
Applied Physics Reviews 8, 011310 (2021)
Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time-consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized, primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we introduce a software, DeepTrack 2.0, to design, train, and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking, and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.0 can be easily customized for user-specific applications, and, thanks to its open-source object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.