October 29, 2024

Vehicle-to-vehicle (V2V) and vehicle-to-everything (V2X) communications systems have long been touted as key drivers of future safety and collision-avoidance technologies. These systems let cars, commercial vehicles, and other road users exchange real-time data about their location and velocity, feeding traffic and road-hazard updates and enabling future safety-assistance functions.

Although V2V has yet to make a significant impact on driving, it still has the potential to improve safety and become an integral part of the future of driving, in a way that was unimaginable when it was first conceived. That future will depend heavily on a piece of technology that did not exist when V2V was created: car cameras.

V2V was initially designed to send sensor data over very low-bandwidth connections. Its early designers did not anticipate AI, vision data, or connectivity capable of carrying vision-based data. As car cameras become more common, they stand to transform V2V and the systems built on it.

Are car cameras worth it?

In the coming years, car cameras will be ubiquitous, with each car carrying multiple cameras mounted around the vehicle and inside the cabin. Today they serve mainly as a second pair of eyes for the driver: showing what is behind the car while reversing, supporting ADAS functionality, and assisting with parking. Dash cams also serve an evidence function, recording events and collisions much as smart doorbells do for homes. Yet none of these cameras can communicate what they see to other vehicles or to infrastructure. The vision data a car creates never leaves it: no AI consumes it, and it is never shared.

Now imagine if car cameras could tell other vehicles what they see around them. Cameras can communicate the state of the road far better than classic V2V, because vision is the ultimate sensor: it sees more, and collects more data, than any other sensor. Consider a pothole. With sensor data alone, a car detects a pothole only indirectly: it bumps through it, hard-brakes, or swerves around it, and the system infers from those signals that a pothole might be there before sharing that guess with nearby vehicles. Sensor data is abundant and there are good practices for analyzing it, but could you ever be certain you had really spotted a pothole? A camera, by contrast, sees the pothole directly. Combined with AI, vision data "sees" the hazard even when the car neither swerved nor braked, making the pothole easier to confirm and sparing other cars from having to repeat the same behavior.
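
As a minimal sketch of that flow, the Python below turns a camera frame into structured hazard reports and shares only the confident ones. Every name in it (Detection, detect_hazards, broadcast) is an illustrative stand-in, not a real camera or V2V API.

```python
# Minimal sketch, assuming a pretrained onboard detector. All names
# here are illustrative stand-ins, not a real V2V or camera API.
from dataclasses import dataclass, asdict
import json

@dataclass
class Detection:
    label: str         # e.g. "pothole"
    confidence: float  # model score in [0, 1]
    lat: float         # estimated GPS position of the hazard
    lon: float

def detect_hazards(frame) -> list[Detection]:
    # Stand-in for an onboard vision model; a real system would run a
    # compact neural detector on the camera frame here.
    return [Detection("pothole", 0.93, 45.4642, 9.1900)]

def broadcast(detections: list[Detection], min_conf: float = 0.8) -> None:
    # Share only confident detections, so nearby cars learn about the
    # pothole even though this car never hit it or swerved around it.
    for d in detections:
        if d.confidence >= min_conf:
            print(json.dumps(asdict(d)))  # stand-in for a V2X message

broadcast(detect_hazards(frame=None))
```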

Another example where vision beats sensor data is free-parking-spot detection. This application helps drivers locate open street parking and, when fully deployed, can reduce congestion in urban centers. The sensor-based approach relies on park-in and park-out events: a spot is only reported when a car pulls into or out of it. That method locates roughly three to four spots per car per day, with the added challenge of verifying that those locations are genuine parking spots, because a car can only contribute data about the spots it has actually used. Vision-based spot detection, in contrast, collects data before the car ever parks: a camera-equipped car spots free spaces as it drives past them, including spaces it will never use. That yields significantly more parking data. A study from Milan found that vision-equipped cars could find 30 to 40 free parking spaces per hour, versus the three to four spots per day mentioned above.
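
The gap between the two approaches can be sketched in a few lines. The snippet below is illustrative only: the Spot type and both functions are assumptions, and the point is simply that a driving car observes many candidate spots while a park-out report covers at most one.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Spot:
    lat: float
    lon: float

def parkout_spots(vacated: Spot | None) -> set[Spot]:
    # Sensor-only approach: a car reports at most the one spot it just
    # left (a park-out event), a handful of reports per day.
    return {vacated} if vacated else set()

def vision_spots(gaps_seen: list[Spot]) -> set[Spot]:
    # Vision approach: every empty curbside gap the cameras pass while
    # driving becomes a candidate spot, tens of reports per hour.
    return set(gaps_seen)

curb = [Spot(45.46, 9.19), Spot(45.47, 9.20), Spot(45.48, 9.21)]
print(len(vision_spots(curb)), "spots seen in one pass;",
      len(parkout_spots(None)), "from a car that hasn't parked yet")
```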

The same approach extends to monitoring pedestrians, detecting road hazards, and understanding the effects of work zones; the possibilities are endless when cars share the data they gather about the road around them. Chain collisions, a serious road hazard, offer a good example. Standard V2V requires every car involved in the collision to carry V2V technology: the system detects a hard-brake event from each car and infers from the chain of messages that there is a pileup. With vision data, a single camera-equipped car that witnesses the chain collision can generate the appropriate alerts, delivering the same warning sooner and more accurately.
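
The contrast can be made concrete with a toy comparison. This is not a real protocol; both functions and the three-car threshold are assumptions for illustration.

```python
# Classic V2V infers a pileup only after hard-brake messages arrive
# from several equipped cars; one camera-equipped witness that actually
# sees the chain collision can report it directly.

def pileup_from_v2v(hard_brake_msgs: list[str], min_cars: int = 3) -> bool:
    # Needs every involved car to carry V2V and to report in sequence.
    return len(set(hard_brake_msgs)) >= min_cars

def pileup_from_vision(witness_detections: list[str]) -> bool:
    # A single witness that sees the chain collision is enough.
    return "chain_collision" in witness_detections

print(pileup_from_v2v(["car-a", "car-b"]))      # False: too few reports
print(pileup_from_vision(["chain_collision"]))  # True: one witness suffices
```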

Connectivity and computation

Vision superiority comes at a price. Vision data cannot simply be passed between vehicles the way the short status messages of the original V2V design were (imagine driving along and receiving alerts such as "hard brake ahead" from cars you cannot even see). It requires connectivity and computation. Vision data is heavy, and current networks cannot carry it; this is where 5G networks come in. And that is not all: compute is also needed to run AI models on the vision data, to make sense of it and recognize a parking spot, a pothole, or a hazard. The connectivity budget has to be spent sparingly and wisely to extract only the data that matters, and to avoid overruns and latency, that processing has to happen carefully at the edge, in the vehicle itself.
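
A rough back-of-envelope comparison shows why the raw frames must stay in the car. The sketch below contrasts streaming raw video with sending only compact, edge-produced detections; every figure in it is an assumption for illustration, not a measured number.

```python
# Back-of-envelope sketch; all figures are illustrative assumptions.
RAW_VIDEO_BPS = 5_000_000   # ~5 Mbit/s for one compressed 1080p stream
CAMERAS_PER_CAR = 6         # assumed surround-view camera count
DETECTION_MSG_BYTES = 200   # one compact hazard message (type, position, score)
DETECTIONS_PER_MIN = 4      # assumed rate of noteworthy events

raw_mb_per_min = RAW_VIDEO_BPS * CAMERAS_PER_CAR * 60 / 8 / 1e6
edge_mb_per_min = DETECTION_MSG_BYTES * DETECTIONS_PER_MIN / 1e6

print(f"streaming raw video: ~{raw_mb_per_min:,.0f} MB/min")
print(f"edge detections only: ~{edge_mb_per_min:.4f} MB/min")
```

Under these assumptions, raw video would cost hundreds of megabytes per minute per car, while edge-extracted detections amount to well under a kilobyte, which is why the AI has to run in the vehicle and only its conclusions travel over the network.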

A shared vision

Vision-based V2V also adds a dimension the original vision lacked. The original technology envisaged cars exchanging messages only with other cars in their immediate vicinity, and such point-to-point messages can be hard to interpret, or nearly meaningless, as when a single car reports sensing a free parking space. The solution is a shared vision built from multiple cars in one area: something like a high-definition map of your immediate surroundings, in which each car's individual observations are fused into a larger picture. This would make the free-parking-spot system more efficient and useful, and road-hazard information more informative. A shared vision would also help navigation maps, which today rely on user input plus some GPS and sensor data. Building a common picture from data produced by different car manufacturers is hard, but new standards are emerging. The resulting system would let all cars contribute to one shared, temporary memory of the road, with connectivity and computing optimized to make that shared vision economically feasible.
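
One way to picture such a system is as a service that fuses per-car reports and quickly forgets them. The sketch below is purely illustrative: the Observation type, the five-minute TTL, and the rule requiring corroboration from two cars are assumptions, not part of any published V2X standard.

```python
import time
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Observation:
    car_id: str
    cell: tuple[int, int]  # coarse map-grid cell of the location
    kind: str              # "pothole", "free_spot", "work_zone", ...
    ts: float              # observation time, epoch seconds

class SharedRoadMemory:
    # Temporary by design: reports expire, and an alert needs
    # corroboration from more than one car before it is surfaced.
    TTL = 300.0  # assumed freshness window, in seconds

    def __init__(self) -> None:
        self._cars: dict[tuple, set[str]] = defaultdict(set)
        self._last_seen: dict[tuple, float] = {}

    def post(self, obs: Observation) -> None:
        key = (obs.cell, obs.kind)
        self._cars[key].add(obs.car_id)
        self._last_seen[key] = max(self._last_seen.get(key, 0.0), obs.ts)

    def query(self, cell: tuple[int, int], now: float | None = None) -> list[str]:
        now = time.time() if now is None else now
        return [kind for (c, kind), cars in self._cars.items()
                if c == cell
                and now - self._last_seen[(c, kind)] < self.TTL
                and len(cars) >= 2]

memory = SharedRoadMemory()
memory.post(Observation("car-a", (12, 7), "pothole", time.time()))
memory.post(Observation("car-b", (12, 7), "pothole", time.time()))
print(memory.query((12, 7)))  # ['pothole']: two cars agree, report is fresh
```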
