Situated Intent

[Image carousel: Synthesized Annotation; Object Detection; Semantic Segmentation; Training Example of E-Scooter Riders; Training Example of Non-Riders]


About

IUPUI has developed multiple datasets, participated in various workshops, and written a variety of papers. More information on these topics is provided below.

Pedestrian Dataset

Predicting pedestrian behavior is critical for fully autonomous vehicles to drive safely and efficiently on busy city streets. Future autonomous cars will need to operate in mixed traffic with not only technical but also social capabilities. It is therefore important to estimate the temporally dynamic intent changes of pedestrians, provide explanations of the interaction scenes, and support algorithms with social intelligence.

The IUPUI-CSRC Pedestrian Situated Intent (PSI) benchmark dataset provides two novel types of labels in addition to comprehensive computer vision annotations. The first is the dynamic intent change of each pedestrian to cross in front of the ego-vehicle, obtained from 24 drivers with diverse backgrounds. The second is text-based explanations of the drivers' reasoning when estimating pedestrian intents and predicting their behaviors during the interaction period. These labels enable computer vision tasks such as pedestrian intent/behavior prediction, vehicle-pedestrian interaction segmentation, and video-to-language mapping for explainable algorithms. The dataset also contains driving dynamics and explanations of the driving decision-making reasoning.
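To make the description above concrete, the sketch below shows how one such annotation record could be organized: per-frame intent estimates from the 24 annotating drivers, a free-text reasoning explanation, and standard vision labels. This is only an illustrative assumption; the field names and structure are hypothetical and do not reflect the official PSI release format.

```python
# Hypothetical sketch of a PSI-style annotation record (illustrative only;
# not the official PSI schema or field names).
example_record = {
    "video_id": "example_clip",
    "frame": 120,
    "pedestrian_id": "ped_01",
    "bounding_box": [604, 212, 652, 340],   # x1, y1, x2, y2 in pixels
    "intent_votes": {                        # one intent estimate per annotating driver
        "annotator_01": "cross",
        "annotator_02": "not_cross",
        # ... one entry for each of the 24 annotators
    },
    "reasoning_text": "The pedestrian is looking toward the ego-vehicle "
                      "and slowing down near the curb.",
    "ego_vehicle": {"speed_mps": 5.2, "action": "decelerate"},  # driving dynamics
}
```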

Electric Scooter Dataset

situated-intent.net/e-scooter_dataset/

E-scooters have become ubiquitous in major cities around the world. Their numbers keep growing, increasing their interactions with cars on the road. Compared to traditional vulnerable road users such as pedestrians and cyclists, e-scooter riders not only look different but also behave and move differently. This creates new challenges for vehicle active safety systems and automated driving functions.

Detection is the first step for intelligent systems and AI algorithms to mitigate potential conflicts with e-scooter riders. In this project, we propose a small benchmark dataset for the e-scooter rider detection task and a trained model that detects e-scooter riders in RGB images collected from natural road scenes. The dataset contains an equal number of cropped images of e-scooter riders and other vulnerable road users. For the e-scooter rider detector, we propose an efficient pipeline built on two existing state-of-the-art convolutional neural networks (CNNs), You Only Look Once (YOLOv3) and MobileNetV2. We fine-tune MobileNetV2 on our dataset and train it to distinguish e-scooter riders from pedestrians. With the whole pipeline, we obtain a recall of around 0.75 for e-scooter riders on our raw test sample. Moreover, the classification accuracy of the trained MobileNetV2 on top of YOLOv3 is over 91%, with precision and recall both over 0.9.
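A minimal sketch of such a two-stage pipeline is shown below, assuming a YOLOv3-style person detector (represented here by a placeholder detect_persons function) and torchvision's MobileNetV2 with its head replaced by a two-class classifier. Loading of the actual fine-tuned weights and the real detector are omitted; this is an illustration of the pipeline structure, not the project's released code.

```python
# Two-stage e-scooter rider detection sketch: a person detector proposes
# boxes, and a fine-tuned MobileNetV2 classifies each crop as rider or not.
import torch
import torch.nn as nn
from torchvision import models, transforms

# MobileNetV2 backbone with a 2-class head (rider vs. other vulnerable road
# user). In practice the fine-tuned checkpoint would be loaded here.
classifier = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
classifier.classifier[1] = nn.Linear(classifier.last_channel, 2)
classifier.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def detect_persons(frame):
    """Placeholder for the YOLOv3 stage: given an RGB frame (H, W, 3 array),
    return a list of (x1, y1, x2, y2) person bounding boxes."""
    raise NotImplementedError

def classify_riders(frame):
    """Crop each detected person and label it 1 (e-scooter rider) or 0 (other)."""
    results = []
    for (x1, y1, x2, y2) in detect_persons(frame):
        crop = frame[y1:y2, x1:x2]                 # crop the detected person
        batch = preprocess(crop).unsqueeze(0)      # shape (1, 3, 224, 224)
        with torch.no_grad():
            label = classifier(batch).argmax(dim=1).item()
        results.append(((x1, y1, x2, y2), label))
    return results
```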

Workshops

ITSC-2023: Prediction Of Pedestrian Behaviors For Automated Driving

https://itsc2023wpbp.webflow.io/

With the progress of automated driving technologies, self-driving cars can drive safely on highways and freeways in most circumstances. However, the lack of safe and smooth interactions with pedestrians has become one of the significant obstacles preventing fully autonomous driving on city streets. Protection of vulnerable road users such as pedestrians is of the highest priority in traffic safety, and crashes with pedestrians significantly affect trust and public attitudes toward the new mobility technology. Disruptive interactions with pedestrians may also degrade both the riding experience and driving efficiency in pedestrian-rich road environments. It is therefore vital to understand pedestrians and to plan motion based on predictions of their behaviors.

Beyond pedestrian detection and tracking, many research efforts in the past few years have gone into recognizing pedestrians' behaviors and predicting their trajectories. These advances allow potential crashes to be predicted over a longer horizon and motion-planning decisions to be made accordingly. This workshop focuses on the detection, recognition, and prediction of pedestrian behaviors so that automated driving cars can interact with pedestrians smoothly. The goal is to build a platform for sharing state-of-the-art models, algorithms, and datasets in the field and for identifying research needs and directions.

Workshops Held in Previous Years

IV-2022: Prediction Of Pedestrian Behaviors For Automated Driving

https://iupuitasi.github.io/iv2022/

ITSC-2021: Detection, Recognition, and Prediction of Pedestrian Behaviors

https://itsc2021.github.io/WPB/