Tesla has made big news again recently: an internal email admitted that Autopilot has only reached L2, and this time the system is going completely "pure vision".

On July 10, US time, Tesla's FSD Beta V9.0 was finally pushed to American users. Compared with previous versions, V9.0 is the biggest update in the history of FSD: it completely abandons radar. Tesla recruited 2,000 owners for the beta test, and they signed up enthusiastically. The bugs, however, surfaced soon after.

Sure enough, a netizen named Giacaglia went through the videos posted by Tesla owners and promptly compiled 11 failure moments of FSD Beta 9.0. From the clips it is obvious that the current system can only be regarded as assisted driving: left to itself, without human supervision, things would still go wrong. The bugs visible this time are:

Scene 1: After an automatic turn, the car hits the green belt in the middle of the road.

Scene 2: Fails to recognize the monorail pillars in the road.

Scene 3: Drives into the bus lane.

Scene 4: Goes the wrong way down a one-way street.

Scene 5: Keeps changing lanes, and cannot decide which lane to use when turning right at an intersection.

Scene 6: Drives on top of solid lane lines; when an emergency lane change is needed, the vehicles approaching from behind make it miss the chance, so it can only turn at the next intersection.

Scene 7: Road markings appear after overtaking, forcing an abrupt merge.

Scene 8: Changes lanes too early when preparing to turn left.

Scene 9: Nearly drives into a roadside parking space on the opposite side while turning left.

Scene 10: Cuts across several lanes while turning left.

Scene 11: Detects two stop signs where there is only one.

"As someone working in deep learning, I definitely wouldn't dare ride in a car driven by a neural network..." "Agreed. I do ML (machine learning), and watching today's autonomous driving is like a doctor watching a classmate who failed the course operate on him." "Forcing machines to learn like humans (relying purely on vision) is the wrong direction. Machines have their own advantages (you can freely add radar and other equipment to assist) yet refuse to use them. This is typical dogmatism and bookishness." ...

These are very professional objections. So, has Tesla picked the wrong technology tree? Opinions differ, but judged against the mainstream CV (computer vision) + radar route, Tesla looks a bit like Ouyang Feng, the "Western Venom" of Jin Yong's novels: to cut costs it has gone down the dark path of pure vision, with a die-without-regret spirit throughout. The trouble is that the stakes are consumers' lives...

Why pure vision?

If Tesla had honestly called itself driver assistance, all would be fine. The trouble is that Musk liked to hype "Autopilot" from the beginning, and only after the hype did he admit, in an internal email, that it is an L2 driver-assistance system. Now that Musk has been mythologized, the god-making movement has left him riding a tiger with no way to dismount.

And there are plenty of Tesla bashers at home and abroad, including Dr. Herbert Diess, CEO of Volkswagen Group. Of course, whether Dr. Diess attacks Tesla merely to lull a rival is another matter.

Not to mention how many people have died in accidents involving Tesla's self-driving technology, which has ranked near the bottom of industry evaluations for years. Against that record, the claim that Tesla can reach L4-L5 fully autonomous driving on a "pure vision" scheme alone is hard to believe.

Marc Pollefeys, a professor at ETH Zurich, believes Tesla is unlikely to give up the idea that fully autonomous driving is close at hand. "Many people have paid for it (Tesla's FSD package), so they must keep hope alive," he said. "They are trapped in that story." And the story has become a myth.

So why did Tesla drop radar in favor of pure vision? Tesla has repeatedly stressed that fusing camera data with radar data is difficult: when the two conflict, the system finds it even harder to decide which to trust.

Musk has therefore said that rather than letting the two hold each other back, it is better to pick one and push it to the extreme. In his view, Tesla's deep-learning system is already 100 times better than millimeter-wave radar, and the radar has begun to drag the system down.

At this year's CVPR (Conference on Computer Vision and Pattern Recognition), Andrej Karpathy, Tesla's chief AI scientist, also explained why Tesla is so stubborn. Still, in the face of an unrepentant Tesla, we advise everyone to stay calm.

Why? The reason is simple. Humans drive mainly by vision, but the other senses are not useless; they all play a part: hearing, touch, even subconscious intuition. "In fact, humans drive almost subconsciously, which is how they can predict what will happen next and avoid accidents." That is what Li, general manager of Chery Technology Co., Ltd., said at a forum of the World Artificial Intelligence Conference. On this point, Tesla is a little too obsessed with vision.

Universal vision system and neural networks

So at this CVPR, Tesla's Andrej Karpathy introduced the deep-learning-based Autopilot system in detail; in other words, what are the benefits of going all-in on vision?

Tesla's confidence rests on two pieces of "black technology": the "universal vision system" and neural networks. Karpathy did emphasize that, technically, vision-based autonomous driving is harder to achieve, because it requires neural networks to deliver super-human output from video input alone. "However, once the breakthrough is made, you get a universal vision system that can be cheaply deployed anywhere on Earth."

"We abandoned millimeter-wave radar; the vehicles rely on vision alone." Karpathy believes that with a universal vision system, vehicles no longer need any supplementary information. Tesla has always held that collecting environmental information is one thing and using it is another: the more types and numbers of sensors, the harder they are to coordinate and fuse, and the final effect may well be 1+1<2, which is not worth the candle.

With this release of FSD Beta V9.0, the new algorithm calls on all eight cameras used for autonomous driving, corrects cross-camera distortion and time offsets, stitches the feeds into a panoramic view, and then builds a real-time 3D model of the surrounding environment. That is what Tesla calls "bird's-eye view".
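The "bird's-eye view" idea can be illustrated with a minimal inverse-perspective-mapping sketch. Everything here is an assumption of mine, not Tesla's actual pipeline: a single pinhole camera, a flat ground plane, and illustrative intrinsics. Each cell of a ground-plane grid is projected into the image and sampled:

```python
import numpy as np

def bev_from_camera(image, K, R, t, x_range=(0.0, 20.0),
                    y_range=(-10.0, 10.0), cell=0.1):
    """Inverse perspective mapping (hypothetical sketch):
    sample ground-plane (z=0) cells through the pinhole model
    x_img ~ K (R X + t), giving a bird's-eye-view raster."""
    h, w = image.shape[:2]
    xs = np.arange(x_range[0], x_range[1], cell)
    ys = np.arange(y_range[0], y_range[1], cell)
    gx, gy = np.meshgrid(xs, ys)                     # ground coordinates (m)
    pts = np.stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])  # z = 0 plane
    cam = R @ pts + t[:, None]                       # world -> camera frame
    uvw = K @ cam                                    # project to pixels
    u = (uvw[0] / uvw[2]).round().astype(int)
    v = (uvw[1] / uvw[2]).round().astype(int)
    bev = np.zeros((len(ys), len(xs)) + image.shape[2:], dtype=image.dtype)
    # keep only cells that land in front of the camera and inside the image
    ok = (uvw[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    bev.reshape(-1, *image.shape[2:])[ok] = image[v[ok], u[ok]]
    return bev
```

A real system would do this for eight cameras with learned (not geometric) view transforms, then fuse the results; this sketch only shows the geometry behind the name.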

Specifically, Tesla converts the 2D views into simulated lidar data and then processes that data with (lidar) algorithms, obtaining far better visual ranging accuracy than before. Doesn't it seem odd, though, that if you still need lidar algorithms, why not just use lidar?
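This "simulated lidar" step is essentially the pseudo-lidar idea: back-project a per-pixel depth map into a 3D point cloud that lidar-style detectors can consume. A minimal sketch under a pinhole-camera assumption (the function name and parameters are illustrative, not Tesla's API):

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (meters) into a 3D point
    cloud in the camera frame:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop invalid (zero) depth
```

The resulting (N, 3) array has exactly the shape a lidar point cloud would have, which is why lidar-style 3D detection algorithms can run on it — and why critics ask the question above.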

According to Tesla, its Autopilot system relies on neural networks for feature recognition, prediction, and adjustment. To learn the meaning of road elements such as traffic signs, the system must be trained on a large number of scene materials; the more training it gets, the more scenes it can handle. With the big data accumulated from millions of owners' cars, Tesla can drive itself fairly easily on today's urban roads.
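The training loop described here — feed in labeled scene material, get out a recognizer — can be caricatured with a toy classifier. The features and classes below are entirely synthetic stand-ins; real systems train deep networks on camera frames, not 2D points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for labeled "scene materials": feature vectors for two
# sign types (hypothetical features, for illustration only).
stop_signs  = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(200, 2))
speed_signs = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
X = np.vstack([stop_signs, speed_signs])
y = np.array([1] * 200 + [0] * 200)   # 1 = stop sign, 0 = speed sign

# Minimal logistic-regression "recognizer" trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(stop sign)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = np.mean(pred == y)
```

The point of the caricature is the one the text makes: the recognizer only handles patterns like those it was trained on, so coverage grows with the variety of collected scenes.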

In fact, Musk has always wanted to minimize Tesla's manufacturing costs. On cost, the self-driving cameras on a Tesla Model 3 run only about $65 apiece, while lidar basically costs upwards of $1,000. Bear in mind that in 2018, Velodyne's 64-line HDL-64 lidar was priced as high as $75,000.

Of course, it is this cost control that has supported Tesla's repeated price cuts. But Musk and Tesla remain too superstitious about the power of software and AI. As for the "long tail problem" of autonomous driving, Tesla believes AI and supercomputing can solve it — which is doubtful: even with 99% of the problem solved, the final 1% may remain an insurmountable gap.

In addition, some foreign media believe that the traditional American automaker General Motors will overtake Tesla in 2021, because Tesla has fallen behind in autonomous driving, especially on the "pure vision" route.

Sensor fusion is the future

As for the limitations of pure vision, some insiders argue that its perception cannot meet the required detection targets in certain extreme scenes. Complex weather conditions — rainstorms, fog, dust, strong glare, night — are hostile to both vision and lidar, and hard to handle with any single type of sensor. The problems show up in several major ways:

1) Vision sensors blinded by weather and environmental factors (backlight glare, sandstorm occlusion, etc.);

2) Small objects that a low-resolution visual perception system recognizes too late (speed bumps, small animals, traffic cones, etc.);

3) Unfamiliar objects the system was never trained on, which may be mismatched or missed entirely (rocks fallen onto the road, a tire dropped by the vehicle ahead, etc.);

4) The recognition limits of the vision sensor itself and the high computing power that visual recognition demands.

Even manufacturers running mature autonomous-driving programs have suffered plenty of crashes in intelligent driving, paying a painful price for sensor-system failures. Sensor fusion is therefore a necessary condition for building a stable perception system: visual perception is limited on its own and must be complemented by millimeter-wave radar or lidar.
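The complementarity argument has a simple mathematical core: fusing two noisy, independent measurements by inverse-variance weighting always yields a lower-variance estimate than either sensor alone. A minimal sketch (function name and the numbers are illustrative, not from any production stack):

```python
def fuse_estimates(z_cam, var_cam, z_radar, var_radar):
    """Inverse-variance (Kalman-style) fusion of two independent
    range measurements of the same object."""
    w_cam = 1.0 / var_cam
    w_radar = 1.0 / var_radar
    z = (w_cam * z_cam + w_radar * z_radar) / (w_cam + w_radar)
    var = 1.0 / (w_cam + w_radar)   # always below min(var_cam, var_radar)
    return z, var

# In fog the camera's variance explodes, so the fused estimate
# automatically leans on radar -- the complementarity described above.
dist, var = fuse_estimates(z_cam=42.0, var_cam=25.0, z_radar=40.0, var_radar=1.0)
```

With the camera 25 times noisier than radar, the fused distance lands close to the radar reading, yet still uses the camera's information; neither sensor is discarded.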

Looking back, if any scene in Tesla's bug reel had not been taken over by a human driver, it would have turned into a traffic accident. Is that reassuring? Tesla's own customers are worried too.

Besides, how cameras perceive depth is only one part of the autonomous-driving problem. The state-of-the-art machine learning Tesla relies on only recognizes patterns, which means it struggles in situations it has never seen — and misjudges when it gets confused.

Unlike a human driver, if the system has never encountered a scene, it cannot reason about what to do. "No artificial intelligence system knows what is actually happening," said Kilian Weinberger, an associate professor at Cornell University who works on computer vision for autonomous vehicles.

One more thing: although FSD 9.0 opens up broader scenarios for intelligent driver assistance, as long as it remains an L2 driver-assistance system (not an autonomous driving system), these functions stay awkward, because the driver cannot let go while driving. Human drivers must not only keep their hands on the wheel but also wrestle with the on-board computer on urban roads, which adds extra burden and psychological pressure.

If the bugs in FSD Beta V9.0 keep reappearing on real roads, they will undoubtedly create more hidden dangers for urban traffic. And can this system cope with China's even more complicated open roads? Some friends in our community still have supreme confidence in Tesla: "Nobody has turned it on yet — how do you know it won't work?" Fair enough: mule or horse, Tesla always takes it out for a walk.