Some users have asked me if PLNAR is magic because they can't figure out how it works. PLNAR uses Apple's ARKit framework, introduced in iOS 11. We leverage ARKit to get the coordinates from which we build the room plans. The process combines information from the iOS device's motion-sensing hardware with computer-vision analysis of the scene visible to the device's camera. ARKit recognizes notable features in the scene image, tracks differences in the positions of those features across video frames, and compares that information with the motion-sensing data. The result is a high-precision model of the device's position and motion. Once we have that model and have detected the horizontal plane, we can measure everything. Here is a good link if you want to learn more:
https://developer.apple.com/documentation/arkit/understanding_augmented_reality
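For the technically curious, here is a minimal sketch of the kind of ARKit setup described above: starting a world-tracking session with horizontal plane detection, and measuring the distance between two world-space points. This is an illustration of the general technique, not PLNAR's actual code; the class and method names are invented for the example.

```swift
import ARKit
import simd

// Hypothetical example class, not part of PLNAR
class RoomScanner: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // World tracking fuses camera imagery with motion-sensor data
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]  // detect floors and tabletops
        session.delegate = self
        session.run(configuration)
    }

    // ARKit calls this when it detects a new anchor, such as a horizontal plane
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            print("Found a horizontal plane: \(plane.extent.x)m x \(plane.extent.z)m")
        }
    }

    // Once tracking gives us world-space transforms, measuring is just
    // the distance between two points in that coordinate system
    func distance(from a: simd_float4x4, to b: simd_float4x4) -> Float {
        let p1 = simd_float3(a.columns.3.x, a.columns.3.y, a.columns.3.z)
        let p2 = simd_float3(b.columns.3.x, b.columns.3.y, b.columns.3.z)
        return simd_distance(p1, p2)
    }
}
```

Because ARKit reports positions in real-world units (meters), distances between tracked points correspond directly to physical measurements in the room.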

Did this answer your question?