Tracking and Environment Detection
Tracking and environment detection are core technologies for the application of virtual technologies. They impose demanding requirements on real-time capability, data compatibility, user-friendliness, robustness against environmental influences and algorithmic efficiency. Intuitive interaction with virtual reality applications, ergonomics assessments and tracking in industrial production environments all require precise, marker-free tracking. The goal of the ARVIDA project is to provide marker-free tracking systems that also work reliably in industrial environments. Within the framework of the reference architecture, modular systems are being developed that provide a suitable solution for each application and can be used without expert knowledge. To achieve this, basic research is being promoted that will deliver technological leaps in this core technology.
Marker-free outside-in tracking
Marker-free outside-in tracking systems comprise several fixed-position sensors that monitor a dedicated measuring volume. The objects whose position and orientation (pose) are to be measured move within this volume. While marker-based tracking systems have long been the standard for VR, ARVIDA is researching new marker-free approaches for tracking rigid objects (e.g. tools) and articulated objects (e.g. fingers). The project work focuses on scalable approaches with intelligent sensors in order to achieve high precision while maintaining a high measuring rate and low latency.

At the first ARVIDA status meeting in April 2015, a new system for the real-time tracking of rigid objects was presented. Before an object can be tracked, it must first be taught to the system: the user moves the object within the cameras' field of view, and the system analyses recurring features and saves them in a virtual object model. This model is then used to track the object with several cameras. The approach employs two different image-based methods. For textured objects, characteristic features are detected and matched by their similarity; the correspondences determined across several cameras form the basis for computing the object's pose. For untextured objects, an approach is used that tracks salient object edges, such as those formed at the panel gap of a car door. The taught object geometry is then aligned with the edges observed by the cameras in order to determine the pose. Notably, industrially relevant objects such as an engine block can be tracked with this edge-based method. Upcoming development focuses on simplifying usability through intuitive teaching as well as on improving the precision and robustness of the tracking.
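The feature-matching step for textured objects can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the descriptor format, the nearest-neighbour search and the ratio threshold are all assumptions chosen to show how correspondences between the taught object model and a camera frame might be established before pose computation.

```python
import math

def descriptor_distance(a, b):
    """Euclidean distance between two feature descriptors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_features(model_descriptors, frame_descriptors, ratio=0.8):
    """Nearest-neighbour matching with a ratio test: a model feature is
    accepted only if its best frame candidate is clearly better than the
    second best, which suppresses ambiguous correspondences."""
    matches = []
    for i, d in enumerate(model_descriptors):
        ranked = sorted(
            (descriptor_distance(d, f), j)
            for j, f in enumerate(frame_descriptors)
        )
        if len(ranked) >= 2 and ranked[0][0] < ratio * ranked[1][0]:
            matches.append((i, ranked[0][1]))  # (model index, frame index)
    return matches

# Toy 3-component descriptors: both taught model features reappear,
# slightly perturbed, among the features detected in the frame.
model = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
frame = [(0.0, 0.95, 0.05), (0.9, 0.1, 0.0), (5.0, 5.0, 5.0)]
print(match_features(model, frame))  # → [(0, 1), (1, 0)]
```

The resulting index pairs, accumulated over several calibrated cameras, would then feed a 2D-3D pose solver; the ratio test is a standard device for discarding matches that are not distinctive.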
The tracking will be made available to the ARVIDA project partners, as planned, via the REST interface. For virtual reality applications, the tracking of head positions and input devices will be a focal area of the subsequent project stages. The tracking services can be connected via the reference architecture, for example to interact with representations on projection screens. As with marker-free object tracking, the challenges are achieving high precision while maintaining a high measuring rate and low latency.
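As a hypothetical illustration of how a client might consume such a REST tracking resource: the endpoint path and the JSON payload layout below are assumptions for the sketch, not the project's actual interface, which is defined by the ARVIDA reference architecture.

```python
import json

# Assumed resource layout (illustrative only):
#   GET /tracking/objects/<id>/pose
#   -> {"translation": [x, y, z], "rotation": [qx, qy, qz, qw]}

def parse_pose(payload):
    """Parse a pose resource body into (translation, unit quaternion)."""
    data = json.loads(payload)
    t = tuple(float(v) for v in data["translation"])      # e.g. metres
    qx, qy, qz, qw = (float(v) for v in data["rotation"])
    norm = math_norm = (qx * qx + qy * qy + qz * qz + qw * qw) ** 0.5
    # Re-normalise to guard against rounding in transport.
    return t, (qx / norm, qy / norm, qz / norm, qw / norm)

sample = '{"translation": [0.5, 0.0, 1.2], "rotation": [0, 0, 0, 2]}'
translation, rotation = parse_pose(sample)
print(translation, rotation)  # → (0.5, 0.0, 1.2) (0.0, 0.0, 0.0, 1.0)
```

A real client would poll or subscribe to the resource at the tracking rate; keeping the payload small is one way to meet the latency requirements mentioned above.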