Mobileye's claim, as mentioned, was that a single camera (not two up front like Subaru and others?) can interpret images to determine the 3D environment.
Yup, there are a couple of ways of doing this, but the real key for auto-driving is that the camera is moving. That gives you parallax changes (closer objects off to the side shift more, relative to further-away objects) and focus changes (if your point of focus is far away, stuff that's starting to get blurry is getting closer and is therefore probably of concern).
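The parallax part boils down to simple triangulation: how far a feature shifts between two frames, combined with how far the camera moved, gives you its depth. A minimal sketch, assuming a pinhole camera model (the focal length and baseline numbers here are made up for illustration):

```python
# Depth from motion parallax between two frames of a moving camera.
# Assumes a pinhole model: depth = focal_length * baseline / disparity,
# where baseline is how far the camera traveled between frames.

def depth_from_parallax(focal_px: float, baseline_m: float,
                        disparity_px: float) -> float:
    """Triangulated depth: closer objects shift more between frames."""
    if disparity_px <= 0:
        raise ValueError("no shift: object at infinity (or tracking error)")
    return focal_px * baseline_m / disparity_px

# Car moves ~1 m between frames; a feature that shifts 50 px is
# much closer than one that shifts only 5 px.
near = depth_from_parallax(focal_px=1000.0, baseline_m=1.0, disparity_px=50.0)
far = depth_from_parallax(focal_px=1000.0, baseline_m=1.0, disparity_px=5.0)
print(near, far)  # 20.0 200.0 (meters)
```

The hard part in practice isn't this formula, it's reliably matching the same feature across frames while everything (including other cars) is moving.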
There are even tricks using aspheric lenses to project different parts of the image onto a single sensor and get a little depth info that way, sort of faking two cameras to get parallax without moving the camera, but that's COMPLICATED and doesn't give very good ranging (since the distance between the two points of view is no more than the diameter of the lens). The other approaches just need a camera in relative motion, plus software.
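You can see why the tiny baseline hurts with a back-of-the-envelope error estimate. For triangulation, depth uncertainty grows roughly as depth squared times the feature-matching error, divided by focal length times baseline, so shrinking the baseline from a meter of camera motion down to a few centimeters of lens diameter makes ranging dramatically noisier. The numbers below are illustrative assumptions, not real sensor specs:

```python
# Rough triangulation depth error: dZ ≈ Z^2 * disparity_error / (f * B).
# Comparing ~1 m of camera motion against a ~5 cm lens-diameter baseline.

def depth_uncertainty(z_m: float, focal_px: float, baseline_m: float,
                      disparity_err_px: float = 0.5) -> float:
    """Approximate depth error at range z_m for a given baseline."""
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

moving_camera = depth_uncertainty(z_m=50.0, focal_px=1000.0, baseline_m=1.0)
lens_trick = depth_uncertainty(z_m=50.0, focal_px=1000.0, baseline_m=0.05)
print(moving_camera, lens_trick)  # 1.25 25.0 (meters of error at 50 m range)
```

So at 50 m, the lens-diameter trick is off by tens of meters where the moving camera is off by about a meter, which is the "doesn't give very good ranging" problem in numbers.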