
Administrator · 19,977 Posts · Discussion Starter · #1

Premium Member · 14,156 Posts
Very interesting. I hadn't realized how much the higher-end lidar costs. The examples were interesting and illuminating (pun intended). I know Tesla is planning to use cameras because that is a much cheaper way to go, but seeing the low-end lidar in action makes the camera-only plan seem unlikely to work.
 

Administrator · 19,977 Posts · Discussion Starter · #3
The low-end (inexpensive) LIDAR video shows why it's not ready for highway speeds. The video produced by the high-end version, which costs more than most cars, shows why the average car buyer would never be able to afford the tech.

Technically feasible, high-quality LIDAR data does not equal affordable, at least not yet.
 

Super Moderator · 6,224 Posts
It was a good article. I didn't realize the prices or how much they vary.
You see/read about LIDAR (sometimes multiple units) in various car designs already. The Nissan LEAF 2 has ProPILOT. I wonder if that includes LIDAR?
 

Super Moderator · 6,224 Posts
What sensors does the Cadillac CT6 using Mobileye have? I didn't see anywhere that it has lidar. Related to this thread -- GM must be using the expensive lidar in custom cars to map some number of roads ahead of time (pre-mapping). (A rough sketch of how that pre-mapping gates the feature follows the links below.)

http://www.motortrend.com/news/gm-super-cruise-2018-cadillac-ct6-with-auto-pilot/

From the article: "Highways only. The system only becomes available once you’ve entered a meticulously lidar-mapped, divided, limited-access highway in the U.S. or Canada—or a limited-access stretch of a highway that switches between on-ramps and crossings (like California Highway 101)."
Owner's manual mentioning Super Cruise:
http://www.cadillac.com/content/dam/Cadillac/Global/master/nscwebsite/en/home/Owners/Manuals_and_Videos/04_PDFs/2018-cad-ct6-owners-manual.pdf
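Not GM's implementation, obviously, but the gating the article describes boils down to a simple check: Super Cruise only unlocks when the car's current road segment is in the pre-mapped, limited-access-highway database. A minimal sketch of that idea, with every name and value invented for illustration:

```python
# Hypothetical sketch of the availability gate described in the article:
# the feature engages only on divided, limited-access road segments that a
# lidar-equipped mapping fleet has already covered. All identifiers and data
# here are made up for illustration.

# Road-segment IDs the mapping fleet has already covered (invented examples).
PREMAPPED_SEGMENTS = {"US-101_sb_seg_0412", "I-80_eb_seg_0155"}

def super_cruise_available(segment_id: str, is_divided_limited_access: bool) -> bool:
    """True only if the segment is limited-access AND pre-mapped."""
    return is_divided_limited_access and segment_id in PREMAPPED_SEGMENTS

print(super_cruise_available("US-101_sb_seg_0412", True))   # True
print(super_cruise_available("county_rd_12", False))        # False
```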
 

Super Moderator · 6,224 Posts
I found this Mobileye presentation on using camera images (various manufacturers still use Mobileye; GM is mentioned a few times, Tesla is not).

It was very good, with a lot of videos and explanations. I watched it on YouTube on a PC at 1.5x speed.

Mobileye's claim, as mentioned, is that a single camera (not two up front like Subaru and others) can interpret images well enough to determine the 3D environment. (BTW, their sensor has 1 of every 4 pixels seeing red and the other 3 of 4 with no color filter, which works well in shadows and at night; a rough sketch of that pixel layout follows the link.)

https://www.youtube.com/watch?v=Ywh0votSJxk
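To make the 1-of-4-pixels-red comment concrete, here is a tiny sketch of that kind of red/clear mosaic; the layout and numbers are my own illustration, not Mobileye's actual sensor spec:

```python
# Sketch of a red/clear ("RCCC"-style) pixel mosaic: 1 of every 4 pixels has a
# red filter, the other 3 are unfiltered (clear). Illustrative only.
import numpy as np

def red_pixel_mask(height: int, width: int) -> np.ndarray:
    """Boolean mask that is True on the one 'red' pixel of each 2x2 tile."""
    mask = np.zeros((height, width), dtype=bool)
    mask[0::2, 0::2] = True          # one red pixel per 2x2 block
    return mask

mask = red_pixel_mask(4, 4)
print(mask.mean())                   # 0.25 -> 1 in 4 pixels carries red

# The clear pixels skip the color filter, so each one collects roughly the full
# visible band instead of a third of it -- that extra light is what helps in
# shadows and at night, at the cost of full color information.
```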

 

Registered · 5,410 Posts
Quote:
Did you watch the presentation? GM is mentioned multiple times as their partner, along with what GM wants.

Yes... I love the fact that he runs out of time before he runs out of material, as all good technical types always do. :)
 

Super Moderator · 6,224 Posts
Quote:
Tesla and Mobileye had a very messy parting of the ways... :rolleyes:
https://www.recode.net/2016/7/26/12285930/tesla-mobileye-self-driving-cars

Yes, that is well over a year old news that everyone knows. :) I was talking about GM using Mobileye, which mainly does camera image processing (as the presentation above lays out). There are other players in this field; Nvidia for one, and they work with many vehicle manufacturers. http://www.nvidia.com/object/automotive-partner-innovation.html

Nvidia has been doing this for quite a while and has many details and presentations on what their hardware can do natively, supplying different boards and sensor-input processing (multiple cameras, radar, ultrasonic, lidar, etc.). They are more of an open system than a closed one.

http://www.nvidia.com/object/drive-automotive-technology.html
https://www.youtube.com/results?search_query=nvidia+autonomous+driving
 

Registered · 3,945 Posts
Quote:
Mobileye's claim, as mentioned, is that a single camera (not two up front like Subaru and others) can interpret images well enough to determine the 3D environment.

Yup, there are a couple of ways of doing this, but the real key for auto-driving is that the camera is moving. That gives parallax changes (closer objects to the side shift relative to farther-away objects) and focus changes (if your point of focus is far away, stuff that is starting to get blurry is getting closer and is therefore probably of concern).

There are even tricks using non-spherical lenses to point different parts of the image onto a single sensor and get a little depth information that way, sort of faking two cameras to get parallax without moving the camera. But that's COMPLICATED and doesn't give very good ranging, since the distance between the two points of view is no more than the diameter of the lens. The other approaches just need a camera in relative motion and software.
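To put rough numbers on that: treat two frames from the moving camera as a stereo pair, and the pixel shift of an object follows the usual pinhole relation. The focal length and distances below are made up purely to illustrate why a big baseline (the car moving) ranges far better than a lens-diameter-sized one:

```python
# Rough depth-from-parallax sketch: two frames from a camera that has shifted
# sideways are treated like a stereo pair. All numbers are illustrative, not
# from any real vision system.

def disparity_px(focal_px: float, baseline_m: float, depth_m: float) -> float:
    """Pinhole stereo relation: pixel shift d = f * B / Z."""
    return focal_px * baseline_m / depth_m

focal_px = 1000.0   # assumed focal length, expressed in pixels

# Car moves ~0.5 m between frames: an object 50 m away shifts a measurable
# 10 pixels between the two images.
print(disparity_px(focal_px, baseline_m=0.5, depth_m=50.0))    # 10.0

# Single-lens trick: the two viewpoints are at most a few centimeters apart,
# so the same 50 m object shifts only about 1 pixel -- hence the poor ranging.
print(disparity_px(focal_px, baseline_m=0.05, depth_m=50.0))   # 1.0
```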
 