On the determination of 3D position and orientation of spheroidal particles using defocusing and deep learning
Keywords: Defocusing, 3D PTV, General Defocusing Particle Tracking, non-spherical particles, deep learning
Tracking the 3D position of tracer particles or small objects such as cells or unicellular organisms in miniaturized lab-on-a-chip or biomedical devices is challenging, since multi-camera approaches are often not feasible in these setups. The most successful single-camera approaches for such applications are based on holography or defocusing. Holographic methods have been used to track complex objects such as bacteria (Bianchi et al., 2019) and even to estimate their orientation (Wang et al., 2016). However, these methods require a complex and expensive experimental setup that is not always available in research laboratories. Defocusing methods, on the other hand, work with conventional microscope optics, are easy to implement, and have shown excellent results in 3D PTV experiments (Qiu et al., 2019). Their main drawback is that they normally work only with spherical, mono-dispersed tracer particles. A defocusing method with the potential to measure non-spherical particles is General Defocusing Particle Tracking (Barnkob and Rossi, 2020), which is based on pattern recognition. It can conceptually be extended to more complex tasks by enlarging the reference library of particle images to include not only spherical particles at different depth positions but also non-spherical particles at different orientations. However, whether this approach can work in practice is still unknown. First, is the information contained in simple defocused images sufficient to reconstruct the depth and orientation of non-spherical particles, and if so, under which circumstances? Second, how can the labelled reference images be collected in practice?
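The library-based pattern recognition underlying this idea can be illustrated with a minimal sketch. This is not the GDPT implementation: the defocus model below is a toy anisotropic Gaussian whose width grows with depth and whose elongation follows the in-plane orientation, and all function names and parameters are hypothetical. It only shows the principle of matching a target image against a reference library labelled with depth z and orientation θ via normalized cross-correlation.

```python
import numpy as np

def spheroid_image(z, theta, size=33):
    """Toy defocused image of a spheroid: an anisotropic Gaussian blob.
    Blur width grows monotonically with depth z (illustrative model only;
    real defocus patterns depend on the optics) and the elongation axis
    follows the in-plane orientation theta."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    # Rotate coordinates by theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    sx = 3.0 + 0.4 * z   # minor-axis blur, monotone in z to avoid depth ambiguity
    sy = 2.0 * sx        # major axis: fixed aspect ratio 2 (spheroid projection)
    return np.exp(-(xr**2 / (2 * sy**2) + yr**2 / (2 * sx**2)))

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Reference library: defocused images labelled with (depth z, orientation theta)
zs = np.linspace(-5, 5, 11)
thetas = np.linspace(0, np.pi, 10, endpoint=False)  # pi-periodic for a spheroid
library = {(z, th): spheroid_image(z, th) for z in zs for th in thetas}

def match(target):
    """Return the (z, theta) label whose library image best matches the target."""
    return max(library, key=lambda k: ncc(library[k], target))

# Query with a target drawn from the library grid
z_hat, th_hat = match(spheroid_image(2.0, thetas[3]))
```

In a real experiment the library would contain measured (or synthetically rendered) defocused images, and the open questions raised above are precisely whether such images carry enough information to make this maximum well defined, and how to obtain the labelled images.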
Copyright for all articles and abstracts is retained by their respective authors