Fast Neural Scene Flow
Neural Scene Flow Prior (NSFP) is of significant interest to the vision community due to its inherent robustness to
out-of-distribution (OOD) effects and its ability to handle dense lidar point clouds.
The approach utilizes a coordinate neural network to estimate scene flow at runtime, without any training.
However, it is up to 100 times slower than current state-of-the-art learning methods.
We rediscover the distance transform (DT) as an efficient, correspondence-free loss function that dramatically speeds up
the runtime optimization, allowing, for the first time, real-time performance comparable to that of learning methods.
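To illustrate why a distance transform is correspondence-free, here is a minimal 2D sketch (not the paper's implementation; grid size, extent, and function names are illustrative assumptions). The distance to the target point set is precomputed once on a voxel grid, so evaluating the loss for warped source points is a constant-time lookup per point, with no nearest-neighbor search as in a Chamfer loss.

```python
# Sketch of a distance-transform (DT) loss on a toy 2D voxel grid.
# Assumed setup: points live in [-extent, extent]^2; names are illustrative.
import numpy as np
from scipy.ndimage import distance_transform_edt

def build_dt(target_pts, grid_size=64, extent=1.0):
    """Precompute a Euclidean DT over a regular grid: cells containing a
    target point get distance 0, all others the distance to the nearest
    occupied cell (converted to world units)."""
    empty = np.ones((grid_size, grid_size), dtype=bool)  # True = empty cell
    idx = np.clip(((target_pts + extent) / (2 * extent) * grid_size).astype(int),
                  0, grid_size - 1)
    empty[idx[:, 0], idx[:, 1]] = False  # mark occupied cells as background
    cell = 2 * extent / grid_size
    return distance_transform_edt(empty) * cell

def dt_loss(warped_pts, dt, grid_size=64, extent=1.0):
    """Correspondence-free loss: one grid lookup per warped point."""
    idx = np.clip(((warped_pts + extent) / (2 * extent) * grid_size).astype(int),
                  0, grid_size - 1)
    return dt[idx[:, 0], idx[:, 1]].mean()

rng = np.random.default_rng(0)
target = rng.uniform(-0.9, 0.9, size=(200, 2))
dt = build_dt(target)
# Points already on the target incur zero loss; shifted points do not.
print(dt_loss(target, dt), dt_loss(target + 0.3, dt))
```

During runtime optimization, the warped points change at every iteration while the target cloud stays fixed, so the DT is built once and amortized over all iterations; the per-iteration loss is then just indexing, which is what makes the speedup possible.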