Explainable Prediction of Accident Hotspots at Street Intersections
Ensuring the safety of vulnerable road users (VRUs), such as pedestrians, cyclists, and motorcyclists, is a growing concern in urban environments. Street intersections are particularly critical points, accounting for a significant proportion of traffic accidents involving VRUs. With the increasing complexity of road networks and growing traffic volumes, developing predictive models to assess accident risk at intersections has become essential for proactive safety measures and urban planning. Advances in artificial intelligence (AI), such as deep learning (DL) techniques, and novel data sources, such as connected vehicle data, offer promising opportunities to improve road safety by identifying high-risk areas and supporting data-driven interventions.
SOTERIA proposes a novel approach to predict the probability that a street intersection is a hotspot, i.e., an area with a high risk of accidents involving VRUs. The approach relies on two modules:
- a prediction module which, given an intersection, uses DL to calculate its probability of being a hotspot;
- an explanation module which, given a prediction, provides its interpretation using explainable AI tools.
DL Prediction Module
One of the key innovations of SOTERIA is its DL prediction module, designed to assess the likelihood of a street intersection becoming a hotspot. This module is built on a graph representation of the road network, where intersections are represented as nodes and roads as the edges connecting them. Each node and edge is enriched with detailed road infrastructure data, such as pedestrian crossings, parking spots, and cycle lanes. What sets SOTERIA apart is its integration of connected vehicle data—events like braking, acceleration, and turning manoeuvres—which provide real-time insights into driving behaviour. Using a graph neural network (GNN), the model processes these features to estimate the probability of an intersection being a future hotspot. Traditionally, hotspot identification relies on past accident records (1), but SOTERIA's approach enables proactive risk assessment, detecting potential danger zones before accidents occur. Moreover, the system continuously adapts to changes in road infrastructure (such as road works) and in driver behaviour, making it a dynamic and forward-looking tool for urban safety.
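The piece does not publish SOTERIA's exact architecture, but the description above maps naturally onto a standard GNN node-classification setup. The sketch below, using PyTorch Geometric, is a minimal illustration only: the layer choices and feature names (pedestrian crossings, parking spots, cycle lanes, harsh-braking counts) are assumptions, not the project's actual model.

```python
# Minimal sketch of a GNN hotspot classifier (PyTorch Geometric).
# Architecture and feature names are illustrative assumptions,
# not SOTERIA's published model.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv


class HotspotGNN(torch.nn.Module):
    """Scores each node (intersection) with a hotspot logit."""

    def __init__(self, num_features: int, hidden: int = 64):
        super().__init__()
        self.conv1 = GCNConv(num_features, hidden)  # aggregate neighbour features
        self.conv2 = GCNConv(hidden, hidden)        # two hops of road-network context
        self.head = torch.nn.Linear(hidden, 1)      # per-intersection logit

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        return self.head(h).squeeze(-1)


# Toy graph: 3 intersections (nodes) linked by 2 roads (undirected edges).
# Hypothetical node features: [pedestrian_crossings, parking_spots,
#                              cycle_lane (0/1), harsh_brakes_per_day]
x = torch.tensor([[2., 10., 1., 35.],
                  [0.,  4., 0.,  3.],
                  [1.,  8., 1., 12.]])
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]])  # each road listed in both directions
data = Data(x=x, edge_index=edge_index)

model = HotspotGNN(num_features=4)
prob = torch.sigmoid(model(data.x, data.edge_index))  # P(hotspot) per intersection
print(prob)
```

In a setup like this, the connected vehicle events would be aggregated into per-node and per-edge counts before training, so that behavioural signals (e.g., frequent harsh braking on an approach road) sit alongside the static infrastructure attributes.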
Explainable AI Module
While SOTERIA’s DL prediction module identifies high-risk intersections, its decisions are made by a black-box model, meaning we don’t fully understand why a specific intersection is classified as a hotspot. To address this, SOTERIA incorporates an Explainable AI module, which enhances the interpretability and reliability of predictions. This module operates in two key steps. First, GNNExplainer (2) identifies the most influential roads and infrastructure features contributing to the hotspot classification, helping pinpoint the specific factors driving the risk at an intersection. Next, GraphLIME (3) analyses these features to measure their positive or negative impact on the prediction, providing deeper insights into the learned patterns. By revealing the rules the model has learned, this module makes the AI’s decisions more transparent and trustworthy. Most importantly, it enables urban planners to take targeted safety actions—for example, adding a traffic light or adjusting pedestrian crossings—to mitigate risks before accidents occur. Once modifications are proposed, the DL prediction module can recalculate the risk, helping municipalities test and prioritise interventions for a safer road network.
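As one concrete illustration of the first step, PyTorch Geometric ships an implementation of GNNExplainer. The snippet below shows how it could be applied to the model and toy graph sketched above to recover importance masks over the roads and features around a given intersection; the configuration values are assumptions, and the subsequent GraphLIME step is not shown.

```python
# Sketch: explaining one intersection's prediction with GNNExplainer,
# via PyTorch Geometric's Explainer API (PyG >= 2.2 assumed).
# Reuses `model` and `data` from the prediction-module sketch above.
from torch_geometric.explain import Explainer, GNNExplainer

explainer = Explainer(
    model=model,                         # HotspotGNN from the earlier sketch
    algorithm=GNNExplainer(epochs=200),  # learns soft masks by gradient descent
    explanation_type='model',            # explain the model's own prediction
    node_mask_type='attributes',         # importance per node feature
    edge_mask_type='object',             # importance per road (edge)
    model_config=dict(mode='binary_classification',
                      task_level='node',
                      return_type='raw'),  # the model outputs logits
)

# Why is intersection 0 (or is it not) flagged as a hotspot?
explanation = explainer(data.x, data.edge_index, index=0)
print(explanation.edge_mask)  # most influential roads
print(explanation.node_mask)  # most influential infrastructure features
```

The edge mask highlights which surrounding roads drive the classification, while the node mask ranks the infrastructure and behaviour features; these are exactly the inputs a planner would inspect before proposing a change such as a new traffic light.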
SOURCES
(1) Shafabakhsh, G., & Sajed, Y. (2022). Identification methods of accident hotspots and providing a model for evaluating the number and severity of accidents on roadways. International Journal of Transportation Engineering, 10(1), 865-875.
(2) Ying, Z., Bourgeois, D., You, J., Zitnik, M., & Leskovec, J. (2019). GNNExplainer: Generating explanations for graph neural networks. Advances in Neural Information Processing Systems, 32.
(3) Huang, Q., Yamada, M., Tian, Y., Singh, D., & Chang, Y. (2022). GraphLIME: Local interpretable model explanations for graph neural networks. IEEE Transactions on Knowledge and Data Engineering, 35(7), 6968-6972.
This piece has been authored by UDEUSTO.