A very interesting article on the two technologies and the performance difference between them:
https://www.reddit.com/r/Vive/comments/486uee/a_succinct_explanation_of_the_major_performance/
An excerpt of what Nate Mitchell says in one of the videos:
In the first linked video, Nate Mitchell (Oculus VP) specifically says: as people move very very quickly, even if we can't track the IR LEDs for a moment as they sort of smear across the camera, we still have that IMU data...
It seems the cameras are not able to accurately track very fast movements, such as the arm/hand movements that are typical in "combat" situations.
The full explanation, from the linked thread:
Hand movement is way faster than head movement. The linear speed of the Vive lasers at the edge of their tracking range is 2π × 15 ft / (1/60 s) ≈ 3856 mph. You can't move your hand fast enough to change that by a meaningful percentage. You can basically hook the Vive Controller up to a string and whip it around as fast as you can and not lose tracking.
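That quoted figure can be sanity-checked with a few lines of arithmetic (the 15 ft radius and the 60 Hz sweep rate come from the quote above; the unit conversions are standard):

```python
# Sanity check of the quoted figure: linear speed of a Lighthouse laser
# sweep at a 15 ft tracking radius, with the rotor completing one
# revolution every 1/60 s.
import math

radius_ft = 15.0   # tracked distance from the base station (from the quote)
sweep_hz = 60.0    # rotor speed: one full sweep per 1/60 s (from the quote)

ft_per_s = 2 * math.pi * radius_ft * sweep_hz  # circumference swept per second
mph = ft_per_s * 3600 / 5280                   # ft/s -> mph (5280 ft per mile)

print(round(mph))  # → 3856, matching the figure in the quote
```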
The Rift tracking system was initially optimized around tracking only a headset. Even at fast head-movement speeds it loses optical lock and falls back purely to the IMUs; fast hand speeds are giving them lots of trouble. Two forward-facing cameras let them re-identify the LEDs quickly and give them more signal-to-noise to work with in the edge-pixel data, which is why they are stuck with that setup for fast hand movements. By lowering the emit time of the LEDs they get a shorter exposure with less smear, but lose signal versus noise; they then make up for it by having two cameras in front instead of one. With opposing cameras you can slowly walk around the room and play a point-and-click-style adventure game with Oculus in opposing-sensor mode, as long as you don't need to grab things off the ground (for FOV reasons), but you can't do things like swing swords unless you are in a small area covered by both cameras.
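The emit-time trade-off described above can be sketched numerically. All concrete numbers here (hand speed, distance, camera FOV, resolution, exposure times) are illustrative assumptions, not Oculus specifications; the point is only the scaling: smear shrinks linearly with exposure time, while shot-noise-limited signal-to-noise shrinks only as its square root, so a second camera helps claw back what the shorter exposure gives up.

```python
# Illustrative sketch (assumed numbers, not Oculus's actual parameters) of the
# trade-off: shorter LED emit/exposure time -> less motion smear on the sensor,
# but fewer collected photons -> worse signal-to-noise ratio.
import math

def smear_px(hand_speed_m_s, distance_m, exposure_s, fov_deg=100, width_px=1280):
    # Angle swept by the hand during one exposure, projected onto sensor pixels.
    angle_rad = hand_speed_m_s * exposure_s / distance_m
    return angle_rad / math.radians(fov_deg) * width_px

def relative_snr(exposure_s, reference_s=1e-3):
    # Shot-noise-limited SNR grows like sqrt(collected light) ~ sqrt(exposure).
    return math.sqrt(exposure_s / reference_s)

for exposure in (1e-3, 0.25e-3):  # 1 ms vs 0.25 ms emit time (assumed values)
    print(f"exposure {exposure*1e3:.2f} ms: "
          f"smear {smear_px(5.0, 2.0, exposure):.2f} px, "
          f"relative SNR {relative_snr(exposure):.2f}")
```

Quartering the exposure quarters the smear but only halves the SNR; adding a second front camera recovers roughly a √2 factor of that lost signal.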
Vertical FOV is also low enough that you have to tilt the camera to switch from seated to standing.
Photodiodes in Lighthouse don't have the reacquisition problem: each photodiode knows which photodiode it is, whereas the Rift's Constellation system has to encode each LED's identifier in pulses spread over multiple frames. By having Touch visible through two offset front camera views, they can reacquire faster.
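A toy model makes the asymmetry concrete. The one-bit-per-frame scheme below is an assumed simplification of Constellation's actual LED modulation, chosen only to show why identifying a blob takes several consecutive frames; a Lighthouse photodiode, by contrast, is its own input channel and never needs to be re-identified.

```python
# Toy model (assumed scheme, for illustration only): each Constellation LED
# blinks out its identifier one bit per camera frame, so the camera cannot
# know which bright blob is which LED until it has watched N_BITS frames
# in a row without losing track of the blob.

N_BITS = 4  # frames needed to transmit one LED identifier (assumed)

def led_brightness(led_id, frame):
    # LED side: modulate brightness with bit k of the ID during frame k.
    return (led_id >> (frame % N_BITS)) & 1

def decode_led_id(brightness_over_frames):
    # Camera side: accumulate one bit per frame to recover the identifier.
    led_id = 0
    for k, bit in enumerate(brightness_over_frames):
        led_id |= bit << k
    return led_id

observed = [led_brightness(0b1011, frame) for frame in range(N_BITS)]
print(decode_led_id(observed))  # → 11 (0b1011), only after 4 full frames
```

If tracking is lost mid-sequence, decoding must restart from frame zero, which is exactly the reacquisition delay the paragraph above describes; two offset front cameras shorten the odds of losing the blob in the first place.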