Analyzing the inconsistency in driving patterns between manual and autonomous modes under complex driving scenarios with a VR-enabled simulation platform

Zheng Xu (Department of Civil Engineering, Monash University, Clayton, Australia)
Yihai Fang (Department of Civil Engineering, Monash University, Clayton, Australia)
Nan Zheng (Department of Civil Engineering, Monash University, Clayton, Australia)
Hai L. Vu (Department of Civil Engineering, Monash University, Clayton, Australia)

Journal of Intelligent and Connected Vehicles

ISSN: 2399-9802

Article publication date: 12 July 2022

Issue publication date: 11 October 2022


Abstract

Purpose

With the aid of naturalistic simulations, this paper aims to investigate human behavior during manual and autonomous driving modes in complex scenarios.

Design/methodology/approach

The simulation environment is established by integrating a virtual reality interface with a micro-simulation model. In the simulation, vehicle autonomy is developed using a framework that integrates artificial neural networks and genetic algorithms. Human-subject experiments are carried out in which participants virtually sit in the developed autonomous vehicle (AV), which allows for both human driving and autopilot functions within a mixed traffic environment.

Findings

Not surprisingly, an inconsistency is identified between the two driving modes, in which the AV's driving maneuvers cause cognitive bias and make participants feel unsafe. Even though the AV ended up in an accident in only a small portion of cases during the testing stage, participants still frequently intervened during AV operation. On a similar note, even though the statistical results suggest that the AV drives under perceived high-risk conditions, an actual crash rarely happens. This suggests that classic safety surrogate measurements, e.g. time-to-collision, may require adjustment for mixed traffic flow.

Research limitations/implications

Understanding the behavior of AVs and the behavioral difference between AVs and human drivers is important; the developed platform is only a first effort toward identifying the critical scenarios in which AVs might fail to react.

Practical implications

This paper attempts to fill the existing research gap in preparing close-to-reality tools for AV experience and further understanding human behavior during high-level autonomous driving.

Social implications

This work aims to systematically analyze the inconsistency in driving patterns between manual and autopilot modes in various driving scenarios (i.e. multiple scenes and various traffic conditions) to facilitate user acceptance of AV technology.

Originality/value

A close-to-reality tool for AV experience and AV-related behavioral study. A systematic analysis in relation to the inconsistency in driving patterns between manual and autonomous driving. A foundation for identifying the critical scenarios where the AVs might fail to react.


Citation

Xu, Z., Fang, Y., Zheng, N. and Vu, H.L. (2022), "Analyzing the inconsistency in driving patterns between manual and autonomous modes under complex driving scenarios with a VR-enabled simulation platform", Journal of Intelligent and Connected Vehicles, Vol. 5 No. 3, pp. 215-234. https://doi.org/10.1108/JICV-05-2022-0017

Publisher

Emerald Publishing Limited

Copyright © 2022, Zheng Xu, Yihai Fang, Nan Zheng and Hai L. Vu.

License

Published in Journal of Intelligent and Connected Vehicles. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

The introduction of autonomous vehicles (AVs) has triggered a revolution in traditional transportation systems. In particular, it leads to a transition toward automation-supported travelling (Duarte and Ratti, 2018). Over recent years, several initial attempts at AV-enabled transportation systems have demonstrated improvements in safety and mobility and the potential influence on people's travel patterns (Van Brummelen et al., 2018; Soteropoulos et al., 2019).

It is widely accepted that AVs could form a distinctive driving pattern compared to conventional vehicles, in which traveling with shortened headways and improved motions is proven to improve transport efficiency (Hoogendoorn et al., 2014; Morando et al., 2018). Fundamentally, vehicle autonomy develops with advances in sensing systems. A fully autonomous vehicle (referred to as Level 5) (SAE International, 2018) relies on both vehicular and environmental sensor systems, and it is still far from reality due to both physical and technical challenges (Campbell et al., 2018; Kocić et al., 2018; Elmquist and Negrut, 2020; Wang et al., 2020). Vehicular sensing systems are supported by artificial intelligence (AI)-based technologies that enable cars to perceive their surroundings and respond in an appropriate manner (Kuutti et al., 2020). The accuracy of environmental sensing is guaranteed by a fully covered detecting area, for which advanced sensors such as ground-penetrating radar and LiDAR systems are generally required. However, current sensor deployments have critical limitations (Kocić et al., 2018; Campbell et al., 2018). The calibration of an AV relies on technologies such as image processing (Henne et al., 2019; Reza et al., 2020) that demand large data sets for training, where tens of thousands of tests are generally required to fine-tune functions such as collision detection and avoidance. These tests or data sets are typically scarce (Huang et al., 2016; Van Brummelen et al., 2018), as their availability usually comes from specific case studies. For this reason, the trained model may not be transferable (Van Brummelen et al., 2018). Even if the sensing systems are well developed, a significant amount of testing, e.g. hundreds of millions of miles of supervised driving, as pointed out by Kalra and Paddock (2016), is still needed to verify functional reliability and public safety.

Given these concerns, the study of AVs is dominated by simulation-based approaches or on-road trials in simplified road environments (Rosique et al., 2019). In most simulation studies, a naturalistic driving environment is commonly used as it reproduces a three-dimensional (3D) environment, also known as a high-fidelity 3D model, that closely reflects the real-life driving environment (Huang et al., 2016). Over recent years, CARLA by Intel (Dosovitskiy et al., 2017) and Drive Constellation by NVIDIA (NVIDIA, 2017) have become convenient tools for AV testing. For example, they are used to examine the functions of different sensors (Rosique et al., 2019) and to preview certain AV performances (Pérez-Gil et al., 2022) in naturalistic environments. While these tools have proven powerful, the test scenes are usually fixed or predefined. For instance, the tests are generally set up in certain traffic environments and are non-transferable to other cases (Feng et al., 2021). In addition, these tools mainly focus on the technical development of AVs and thus are not suitable for user experience studies because they provide fewer opportunities for such investigation (Gao et al., 2021). On-road trial studies are complementary to simulation studies. In particular, they allow user experience to be collected more thoroughly, which is important for gaining knowledge such as user perceptions and sensing failures. Nevertheless, these tests are expensive and time-consuming to design and implement, leading to limited test flexibility (Gu and Dolan, 2012).

There have been increasing research efforts to improve simulation-aided approaches for autonomous driving studies (Rosique et al., 2019; Lim et al., 2021). Virtual reality (VR) and augmented reality-based simulations, in particular, have attracted attention and applications. Shah et al. (2018) proposed an augmented simulator that allows interaction between cars and the real-world environment. Li et al. (2019) developed an on-road augmented autonomous driving simulation integrating traffic flow simulation. However, the simulator only offered limited viewing angles and environmental changes. Rong et al. (2020) developed a simulator based on the framework of LGSVL, which provides an end-to-end (E2E) and full-stack simulation. Fang et al. (2020) proposed a "scan-and-simulate" approach where the environment could be recreated via the data collected from a LiDAR-equipped vehicle. This approach allowed real-time scene creation, which improved the flexibility of AV testing to some extent. While these works introduced important and alternative simulation approaches for AV development and analysis, nearly all of them focused on the vehicle perspective. There is potential to apply this type of innovative tool to involve other components, e.g. studying user behavior while interacting with AVs (i.e. identifying and understanding the AV user's perception and reaction) (Detjen et al., 2021).

Several studies (Choi and Ji, 2015; Bonnefon et al., 2016; Adnan et al., 2018) examined users' technology acceptance with respect to AVs and emphasized that it is essential to understand the principles behind trust in vehicular autonomy to facilitate AV adoption. For example, they commonly concluded that system transparency, technical competence and situation management are vitally important for improving an AV's trustworthiness. In the literature of AV-related studies, there is a branch of research dedicated to understanding the passenger's view of AVs (i.e. all people on board are considered passengers of an AV), more specifically, to analyzing how passengers perceive certain situations while sitting in an AV (Jing et al., 2020). Again, emerging technologies such as virtual and mixed reality were used to gain knowledge. The most evident advantage of these technology-enabled approaches is the capability to assist behavioral analysis with experiments carried out in a naturalistic environment (Brown et al., 2018; Goh et al., 2019; Xu and Zheng, 2021). Particularly, Sportillo et al. (2018) investigated the take-over actions of AV passengers using VR technology. Brown et al. (2018) analyzed the interactions between human-driven vehicles and AVs in a mixed traffic environment at intersections using VR. Goedicke et al. (2018) proposed an integrated driving simulator that combines VR with on-road tests. Using such a platform, the reactions of the human participants toward driving instructions and training were collected and discussed. Following the spirit of Goedicke et al. (2018), Zou et al. (2021) proposed an improved VR-enabled experiment and studied passengers' views on the AV while sitting in a moving vehicle in the real world. To understand human behavior on AVs more thoroughly, some studies examined potential intervention behavior during autonomous driving (Kuribayashi et al., 2021; Yousfi et al., 2021). For instance, Kuribayashi et al. (2021) conducted initial research to analyze human intervention during the recognition phase in an AV and found that such intervention might be effective in improving the naturalness of driving under certain circumstances. Yousfi et al. (2021) focused on the take-over maneuver during autonomous driving and concluded that the intervention during the take-over phase could be affected by the driver's experience and mental workload. Such studies point to the need for further research on the difference in driving patterns between AVs and human drivers. Most of the aforementioned studies developed their own AV control models for user perception analysis (i.e. without complex intervention behavior) in a relatively fixed traffic environment (i.e. with presupposed traffic dynamics or moving trajectories), which might cause simulation bias due to the discrepancy between the virtual environment and reality (Wynne et al., 2019).

In addition to the simulation-based approaches, comprehensive on-road trials have been used in recent studies to measure the willingness of human drivers to abandon vehicle control under certain circumstances, thereby revealing the trust regarding AV technologies (Wilson et al., 2020; Yu et al., 2021). As a result of behavioral analysis under dynamic real-world environments, more realistic data could be obtained. Wilson et al. (2020) carried out an experiment on a public highway with a Level 2 automated vehicle to measure the engagement of human drivers during the activation of the advanced driver assistance systems (ADAS). Their results showed a low engagement rate and positive acceptance from individual participants. Yu et al. (2021) conducted an on-road study on public roadways to detect the use frequencies of ADAS and found that user acceptance regarding AV technologies could be interpreted by their hand positions on the steering wheel.

Referring to environmental dynamics in real-world case studies, Xu et al. (2021) incorporated traffic flow dynamics from PTV VISSIM into the VR-enabled simulation platform. Using such a platform, simulation quality could be improved by optimizing traffic representations. The study demonstrated the need for an authentic traffic flow environment in simulation studies to support the analysis of driving behaviors more efficiently.

To summarize, existing studies have made attempts to understand people's behavior toward autonomous driving, but many of them focused on either lower-level autonomy or simplified behavioral analysis. To the best of the authors' knowledge, there is not yet any systematic study analyzing the inconsistency in driving behaviors between manual and autopilot modes under various driving scenarios (e.g. multiple scenes and various traffic conditions). To this end, this work examines the impact of vehicular autonomy (Level 4 AV) on human perception and reaction. Specifically, this work will:

  • build up a platform in a VR-enabled naturalistic 3D environment for the analysis of humans’ behavioral adaptation when interacting with a highly automated vehicle;

  • develop and calibrate an AV control model based on machine learning algorithms in the simulated environment;

  • conduct a human-in-the-loop driving experiment to collect humans' behavior changes during manual and autopilot driving modes; and

  • examine driving intervention behavior (DIB) during the autopilot driving mode.

The rest of the paper is organized as follows. Section 2 presents the methodology involved in the proposed approach. Section 3 demonstrates the details of the AV development and evaluation. Section 4 illustrates the results of the human-subject experiment, and the conclusion is presented in Section 5.

2. Methodology

This work performs driving behavior analysis based on a specifically developed AV model that allows for both manual and autopilot driving modes within an integrated naturalistic simulation platform. Figure 1 demonstrates the basic workflow of the platform establishment. The methodology consists of three parts:

  1. the improvement of the simulation platform;

  2. the development of the AV; and

  3. the human-in-the-loop analysis.

In particular, the main tasks include:

  • applying high-definition rendering pipeline (HDRP) strategy for platform upgrade;

  • activating artificial neural network (ANN) and genetic algorithm (GA) integrated module for AV’s intelligence;

  • designing comparative driving tasks in the immersive environment for behavioral analysis.

2.1 Improvement of the simulation platform

As shown in Figure 1, the integrated naturalistic simulation platform consists of two parts, namely, the traffic dynamics (supported by micro-simulation) and the VR interface (supported by environment rendering in Unity). For modeling traffic dynamics, VISSIM has been demonstrated to be of high quality in the literature (Ejercito et al., 2017). This micro-simulation tool adopts real-world traffic rules such as right-of-way, overtaking and lane changing; it also reproduces vehicle motion behavior and driving interactions based on the Wiedemann 74 car-following model (Gettman and Head, 2003). For modeling real-world features as well as the futuristic vehicle, the HDRP rendering strategy in Unity allows for realistic materials and textures, contributing to a highly realistic visual experience (Thorn, 2020). The connection between the two terminals (i.e. Unity and VISSIM) is achieved via a component object model (COM) interface and driving simulator interface scripts (Ramadhan et al., 2019). Communication between VISSIM and Unity happens in real time, which enables the necessary exchange and synchronization of information, including vehicle characteristics, speed, moving direction, signal control, traffic rules, etc. Thus, the integration of VISSIM and naturalistic 3D environments is expected to offer a much more authentic scene as the fundamental element of a driving simulator, contributing to flexible and extensible interaction between road users.
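For readers unfamiliar with the COM bridge, the sketch below illustrates, in simplified form, how a VISSIM network can be stepped through and vehicle states pulled out for the Unity side. It is a minimal illustration assuming the standard VISSIM COM type library; the file path, loop length and serialization step are placeholders, not the actual interface scripts used in this study.

    using System;
    using VISSIMLIB; // COM reference to the VISSIM object library

    public static class VissimBridge
    {
        public static void Main()
        {
            // Launch VISSIM via COM and load the network of the study area.
            IVissim vissim = new Vissim();
            vissim.LoadNet(@"C:\Models\MonashFreeway.inpx", false);

            for (int step = 0; step < 36000; step++) // e.g. 1 h at 10 steps per second
            {
                vissim.Simulation.RunSingleStep();

                // Pull the speed (and, analogously, position, type, etc.) of every
                // vehicle at this time step, to be forwarded to Unity for rendering.
                object[,] speeds = (object[,])vissim.Net.Vehicles.GetMultiAttValues("Speed");
                // ... serialize the states and send them to the Unity side ...
            }
            vissim.Exit();
        }
    }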

The proposed simulation environments are developed based on the model of Xu et al. (2021). Key study areas include the Monash Freeway Ramp No. 9, No. 10 and a local business center in Melbourne, Australia. The detailed reasons for the selection of study areas are as follows:

  • In general, road safety is a major concern in areas involving high speeds or frequent conflicts. With regard to high speed, it is reported that over 1.35 million people die annually worldwide due to road crashes, of which accidents on highways account for over 50% (WHO, 2018). Regarding conflicts, the majority of urban crashes happen during lane changing or other maneuvers that involve gap taking and interaction with pedestrians (Zheng et al., 2021).

  • In Australia, freeway collisions make up over 40% of fatal crashes annually, with merge sections being the primary location (BITRE, 2022). In particular, on-ramp merging sections incur high speed and high conflicts. According to Victoria Crash Statistics (Vicroads.vic.gov.au, 2021), more than ten serious crashes occurred in our selected study areas during 2014–2019, of which two fatal crashes happened within the merging sections.

  • The selected scene combines diverse types of road infrastructure, leading to unavoidable conflicts such as intruding-vehicle conflicts from roadside parking, car-following conflicts due to stop-and-go conditions caused by traffic signals, illegal or unexpected crossing conflicts such as pedestrian crossings, as well as normal merging conflicts caused by vehicles entering the main road from side recreational or commercial amenities.

Particularly, the selected freeway case concerns a major freeway linking the city center of Melbourne to its suburban areas, and the two ramps reflect two standard geometric designs for on-ramp sections in Australia (e.g. by guideline, the transition section is connected by straight and bend-shaped roads with a 15-degree slope). The urban driving environment consists of a typical two-lane arterial road crossing a local business center with a speed limit of 50 km/h. Both scenarios are relevant to the daily commuting routine of the general population, so it is of value to reveal the inconsistency in driving patterns between human drivers and the AV and identify potential scenarios in which the AV might fail to react. Figure 2 illustrates the improved features of the simulation environments from the VR views.

2.2 Autonomous vehicle development approach

The developed AV aims to feature normal vehicular functions and intelligence. Control algorithms for the AV are developed based on the machine learning (ML) agents scheme in Unity (Juliani et al., 2018), in which AI agents serve as the vehicle sensing system. Unlike the modular-based approaches in specific element development, such as the perception system of AVs (Liu, 2020), the AV system in this study is based on an E2E AI learning approach where the entire vehicle driving is supported by machine learning. The specifications, such as sensing coverage, vehicle suspension, default acceleration rate and braking rate, are defined according to the basic vehicle design in real life. For driving simulators, such physics are generally controlled and reflected by revolutions per minute (RPM) values and gear rates (Wynne et al., 2019). Table 1 summarizes the basic features of the developed AV in this study.
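To give a concrete sense of how such physics are expressed in a Unity-based simulator, the sketch below applies engine output as motor torque and reads back wheel RPM, echoing how driving simulators expose vehicle physics through RPM values and gearing. The torque value and gear ratio are illustrative assumptions, not the specification in Table 1.

    using UnityEngine;

    // Illustrative wheel-level control for a simulated vehicle.
    public class SimpleDrivetrain : MonoBehaviour
    {
        [SerializeField] private WheelCollider[] driveWheels;
        [SerializeField] private float maxMotorTorque = 400f; // N*m, assumed value
        [SerializeField] private float gearRatio = 3.5f;      // assumed value

        // throttle01 is the normalized accelerator input in [0, 1].
        public void Accelerate(float throttle01)
        {
            foreach (WheelCollider wheel in driveWheels)
            {
                wheel.motorTorque = throttle01 * maxMotorTorque * gearRatio;
            }
        }

        // Wheel RPM reflects the vehicle's current speed through the tires.
        public float CurrentRpm()
        {
            return driveWheels.Length > 0 ? driveWheels[0].rpm : 0f;
        }
    }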

The basic vehicular functions include vehicle stability and the general control of lateral and longitudinal dynamics, in which the vehicle stability corresponds to characteristics such as vehicle mass, tire forces, yaw rate, side slip rate, etc., and lateral and longitudinal dynamics can be written as:

(1) Lateral acceleration: $a_y = V\dot{\beta} + V\dot{\varphi}$
(2) Longitudinal acceleration: $m a_x = F_{xf} + F_{xr} - F_{aero} - R_{xf} - R_{xr} - mg\sin\alpha$
where $V$ denotes vehicle speed, $\dot{\beta}$ denotes side slip rate, $\dot{\varphi}$ denotes yaw rate, $F_x$ and $R_x$ denote tire forces and road rolling resistance, respectively, $F_{aero}$ denotes the aerodynamic drag force, and $mg\sin\alpha$ represents the gravitational force due to road inclination.
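As a worked illustration, the force balance in equation (2) maps directly into code; the helper below is a hypothetical transcription, with all force values supplied by the caller.

    using System;

    public static class VehicleDynamics
    {
        // Longitudinal acceleration from the force balance in equation (2).
        // Forces are in newtons, mass in kg, alpha is road inclination in radians.
        public static double LongitudinalAcceleration(
            double Fxf, double Fxr,   // front/rear tire forces
            double Faero,             // aerodynamic drag
            double Rxf, double Rxr,   // front/rear rolling resistance
            double mass, double alpha)
        {
            const double g = 9.81;
            return (Fxf + Fxr - Faero - Rxf - Rxr - mass * g * Math.Sin(alpha)) / mass;
        }
    }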

A series of background algorithms in ML-Agents work collaboratively to develop the general vehicular functions (a minimal agent skeleton is sketched after the list below). Such algorithms include:

  • soft actor-critic (SAC) (Haarnoja et al., 2018) for maintaining virtual objects’ robustness and stability, which helps generate a reasonable gravity, side slip rate and yaw rate in the virtual environment;

  • proximal policy optimization (PPO) (Schulman et al., 2017) for maintaining the AV’s lateral control; and

  • behavioral cloning (BC) (Hussein et al., 2017) for maintaining the AV’s longitudinal control.
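As referenced above, the sketch below outlines a minimal ML-Agents driving agent wired to these ideas: distance observations in, acceleration and steering out, with a crash ending the episode (i.e. starting a new "generation"). The reward values and sensing helpers are illustrative placeholders rather than the exact setup of this study.

    using UnityEngine;
    using Unity.MLAgents;
    using Unity.MLAgents.Sensors;
    using Unity.MLAgents.Actuators;

    public class AvDrivingAgent : Agent
    {
        [SerializeField] private Rigidbody body;

        public override void OnEpisodeBegin()
        {
            // Respawn the AV at its spawn point after a crash or failure.
            transform.localPosition = Vector3.zero;
            body.velocity = Vector3.zero;
        }

        public override void CollectObservations(VectorSensor sensor)
        {
            // Distances to curb, lead vehicle and nearest pedestrian (placeholders).
            sensor.AddObservation(DistanceToCurb());
            sensor.AddObservation(DistanceToLeadVehicle());
            sensor.AddObservation(DistanceToNearestPedestrian());
            sensor.AddObservation(body.velocity.magnitude);
        }

        public override void OnActionReceived(ActionBuffers actions)
        {
            float accelerate = actions.ContinuousActions[0]; // in (0, 1)
            float steer = actions.ContinuousActions[1];      // in (-1, 1)
            ApplyMotorTorque(accelerate);
            ApplySteering(steer);
            AddReward(0.01f); // small living reward for progressing without a crash
        }

        private void OnCollisionEnter(Collision other)
        {
            AddReward(-1f);   // penalize crashes, then start a new generation
            EndEpisode();
        }

        // Hypothetical helpers standing in for the sensing and control plumbing.
        private float DistanceToCurb() { return 0f; }
        private float DistanceToLeadVehicle() { return 0f; }
        private float DistanceToNearestPedestrian() { return 0f; }
        private void ApplyMotorTorque(float a) { }
        private void ApplySteering(float s) { }
    }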

Based on the logical vehicle lateral and longitudinal dynamics, vehicle intelligence can be further developed by absorbing driving patterns from empirical vehicle movements within the simulation platform. Such empirical vehicle motions consist of various human-performed trajectories [i.e. experimental data collected with a driving simulator in Xu et al. (2021)] and stochastic trajectories based on the micro-simulation (i.e. randomly generated trajectory data from VISSIM). Such a combination of data ensures that all attributes needed for AV training are covered. Different variables, including position, speed, acceleration, etc., are involved, and they can be represented as:

(3) $x = [x_{1,U}, \dots, x_{i,U}, x_{1,M}, \dots, x_{j,M}], \quad x \in X$
where $x_{i,j}$ denotes the variables (i.e. vehicles with a specific speed and acceleration rate), $i$ and $j$ are vehicle IDs, and $U$ and $M$ denote urban road and freeway merging, respectively. $X$ denotes the entire set of variables.

An ANN and GA integrated module manages the information processing, in which the ANN processes the variables with respect to a specific $x_{i,j}$ to recognize the surrounding vehicles and generate outputs (i.e. steering and acceleration) based on distance, and the GA optimizes the outputs from the ANN for motion smoothing. More specifically, the input to the ANN consists of distance values from the AV to its surrounding objects, such as the distance between the car and curbs, other moving vehicles and pedestrians, while the outputs are vehicle maneuvers such as speed and torque control. Considering the AV as an ellipsoid, the distance between any object and the AV can therefore be determined by the point-to-ellipsoid distance. A general corollary is as follows (Uteshev and Goncharova, 2018):

(4) For the point $X_i = [x_i, y_i, z_i]$ and the ellipsoid $V(x, y, z): \frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} = 1$

The distance equation can be determined as:

(5) $d = |\Phi(\mu, t)| = \mu^4 + A_1\mu^3 + A_2\mu^2 + A_3\mu + A_4$

in which,

(6) $A_1 = x_i^2 + y_i^2 + z_i^2 - t - a^2 - b^2 - c^2$
(7) $A_2 = a^2b^2c^2\left[\left(\frac{1}{b^2c^2} + \frac{1}{a^2c^2} + \frac{1}{a^2b^2}\right)t + \left(\frac{x_i^2}{a^4} + \frac{y_i^2}{b^4} + \frac{z_i^2}{c^4}\right) - \left(\frac{1}{a^2} + \frac{1}{b^2} + \frac{1}{c^2}\right)V_i\right]$
(8) $A_3 = a^2b^2c^2\left[V_i - \left(\frac{1}{a^2} + \frac{1}{b^2} + \frac{1}{c^2}\right)t\right]$
(9) $A_4 = a^2b^2c^2\,t$, with $V_i = V(x_i, y_i, z_i)$

where $\mu$ denotes the multiple zeros of the quartic polynomial $\Phi(\mu, t)$, which can be calculated with a given substitution $t$, and $V_i$ represents the ellipsoid function evaluated at the point. The equations form the basic corollary for verifying whether a random point $X_i = [x_i, y_i, z_i]$ lies on the ellipsoid $V(x, y, z)$, or for measuring the distance from such a point to the ellipsoid.
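For intuition, the sketch below approximates the point-to-ellipsoid distance by projecting the point radially onto the surface, using the quantity $V_i$ from equation (9) to test whether the point lies inside or outside. This is a simple upper bound for illustration, assuming the point is not at the ellipsoid center; the study itself obtains d from the quartic $\Phi(\mu, t)$.

    using System;

    public static class EllipsoidDistance
    {
        // Radial approximation: scale the point onto the ellipsoid surface
        // along the ray from the center and measure the gap. Since the scaled
        // point lies on the surface, this bounds the true nearest distance.
        public static double Approximate(
            double x, double y, double z, double a, double b, double c)
        {
            // V_i as in equation (9): < 1 inside, = 1 on, > 1 outside the ellipsoid.
            double Vi = x * x / (a * a) + y * y / (b * b) + z * z / (c * c);
            double k = 1.0 / Math.Sqrt(Vi);              // scale factor to the surface
            double norm = Math.Sqrt(x * x + y * y + z * z);
            return Math.Abs(1.0 - k) * norm;
        }
    }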

Then, the changes in vehicle speed and torque can be determined based on a calculated d and the current speed. Figure 3 illustrates the basic framework of the machine learning process involved in this study, and Figure 4 presents an example of the working status of the AV model based on such principles in the VR environment.

In Unity, the aforementioned neural network and GA are implemented in C#. The main function of the ANN is to produce outputs for acceleration, in (0, 1), and the turning angle, in (−1, 1). The GA then converts the output to a non-linear form to make the operation of the AV smoother. The coding details are presented in the following algorithms.

ANN algorithm in C#:

    // inputLayer, hiddenLayers, weights, biases and outputLayer are
    // Matrix<float> fields (MathNet.Numerics); PointwiseTanh applies tanh
    // element-wise. Sigmoid is a helper squashing a value into (0, 1).
    public (float, float) RunNetwork(float a, float b, float c)
    {
        // Load the three distance sensor readings into the input layer.
        inputLayer[0, 0] = a;
        inputLayer[0, 1] = b;
        inputLayer[0, 2] = c;
        inputLayer = inputLayer.PointwiseTanh();

        hiddenLayers[0] = ((inputLayer * weights[0]) + biases[0]).PointwiseTanh();
        for (int i = 1; i < hiddenLayers.Count; i++)
        {
            hiddenLayers[i] = ((hiddenLayers[i - 1] * weights[i]) + biases[i]).PointwiseTanh();
        }
        outputLayer = ((hiddenLayers[hiddenLayers.Count - 1] * weights[weights.Count - 1])
                       + biases[biases.Count - 1]).PointwiseTanh();

        // First output is acceleration; second output is steering.
        return (Sigmoid(outputLayer[0, 0]), (float)Math.Tanh(outputLayer[0, 1]));
    }

GA algorithm in C#:

    // Randomly mutate the weight matrices of the naturally selected networks.
    private void Mutate(NNet[] newPopulation)
    {
        for (int i = 0; i < naturallySelected; i++)
        {
            for (int c = 0; c < newPopulation[i].weights.Count; c++)
            {
                if (Random.Range(0.0f, 1.0f) < mutationRate)
                {
                    newPopulation[i].weights[c] = MutateMatrix(newPopulation[i].weights[c]);
                }
            }
        }
    }

    // Perturb a random subset of entries of a weight matrix, clamped to [-1, 1].
    private Matrix<float> MutateMatrix(Matrix<float> A)
    {
        int randomPoints = Random.Range(1, (A.RowCount * A.ColumnCount) / 7);
        Matrix<float> C = A;
        for (int i = 0; i < randomPoints; i++)
        {
            int randomColumn = Random.Range(0, C.ColumnCount);
            int randomRow = Random.Range(0, C.RowCount);
            C[randomRow, randomColumn] = Mathf.Clamp(
                C[randomRow, randomColumn] + Random.Range(-1f, 1f), -1f, 1f);
        }
        return C;
    }

Unlike general predictive models, such as logistic regression, which describes the relationship between a dependent binary variable and a series of independent variables, the ANN produces outputs directly from an activation function based on the inputs. However, it is worth noting that the matching between inputs and outputs is via a single-layer perceptron in our ANN framework, because the ML-Agents in Unity compile trajectory information automatically and calculate the point-to-ellipsoid distance directly. This simplifies the architecture of the ANN.

A commonly recognized defect of a single-layer neural network is that it yields a linear function, which can bias the outputs. For example, the outputs (steering and speed estimations in this case) could go wrong, as reported in Da Silva et al. (2017). It may also limit the resulting motion of the vehicle: under a linear mapping, the turning angle and the acceleration rate of the wheels only change by a fixed incremental value, which does not reflect reality. To this end, the GA function is applied as the optimization tool. Specifically, a sigmoid function (Kriegeskorte and Golan, 2019) is used to convert the fixed output values from the ANN to multiple values (following a stochastic distribution). As a result, a non-linear representation is generated to map situations to motion controls. For this reason, the resulting motions of the developed AV are smoother in turning and acceleration.
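A toy contrast makes the point: a linear mapping moves the control by the same fixed increment per unit of perceptron output, whereas a sigmoid produces graded, saturating commands. The scaling constant below is illustrative.

    using System;

    public static class OutputSmoothing
    {
        // Logistic sigmoid squashing a raw perceptron output into (0, 1).
        private static double Sigmoid(double u)
        {
            return 1.0 / (1.0 + Math.Exp(-u));
        }

        // Linear mapping: every unit of network output shifts the throttle by
        // the same fixed increment, producing abrupt, stepwise motion.
        public static double LinearThrottle(double raw)
        {
            return 0.1 * raw; // fixed incremental gain (illustrative)
        }

        // Non-linear mapping: small outputs yield gentle changes and large
        // outputs saturate smoothly, giving graded, smoother controls.
        public static double SmoothThrottle(double raw)
        {
            return Sigmoid(raw);
        }
    }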

2.3 Human-in-the-loop analysis

Using the developed simulation platform, a human-subject experiment is designed to investigate the driver’s response and interaction when sitting in the developed AV.

2.3.1 Participants and apparatus

Eighteen people (13 males and five females) are invited to the driving experiment. Their ages range from 21 to 48 (mean = 31.2, and standard deviation = 7.9). All participants hold valid driving licenses and have at least one year of independent driving experience. Each participant signed both the explanatory statement and consent form before the experiment, and the university ethics committee approved the study.

The driving simulator comprises three parts: a desktop computer with an NVIDIA RTX3090 graphics card to run the simulation, a VR headset to provide the vision of the driving scene and a Logitech G923 steering wheel and pedals to perform actions and maneuvers. In addition, a Polar H10 heart rate monitor is used during the experiment. Figure 5 illustrates the experimental scene and the equipment setup.

2.3.2 Experiment design and data collection

The objective of the experiment is to compare drivers’ perceptions and reactions when in the manual driving mode and the autonomous driving mode. In this study, Level 4 vehicular autonomy is applied as the AV driving mode, i.e. the vehicle can operate automatically, but the human driver can still interrupt the autonomous control anytime (SAE International, 2018). There are three testing scenes:

  1. freeway merging at on-ramp under ramp metering;

  2. freeway merging at on-ramp without ramp metering; and

  3. lane changing in an urban area involving conflicts with pedestrians and parking vehicles.

Two driving tasks are requested in each scene. The first task is to drive as they usually do, and the second is to sit in autopilot mode. In the second task, participants can take over vehicle control when feeling unsafe by pressing a button on the steering wheel, and the autopilot mode is terminated accordingly. Each participant conducts a total of six driving tasks (i.e. three scenes with two tasks per scene), lasting around 15 min.

Throughout the experiment, the participants need to fill in four questionnaire surveys for data collection concerning demographics, simulation sickness, safety perception and simulator acceptance. In particular, a “demographics” survey collects the driving and VR experiences of the participants; a “Simulator Sickness Questionnaire (SSQ)” (designed in Kennedy et al., 1993) records the 3D motion sickness of the participants; a “safety perception” survey asks about participants’ perceived experiences during the driving tasks; and a “Van Der Laan (VDL)” survey (designed in Van Der Laan et al., 1997) asks the participants to evaluate the realism of the VR simulation platform and indicate their acceptance of it as an AV simulator (Table 2).

3. Autonomous vehicle development and evaluation

This section demonstrates the performance of the developed AV driving model within the integrated simulation platform. Subsection 3.1 presents the training and calibration of the driving control model. Subsection 3.2 presents relevant performance measurements. Then, to validate the driving model, a comparison between the conventional human-driven car and the developed AV regarding vehicle motions and accident rates is presented in Subsection 3.3. Finally, Subsection 3.4 presents the performance results of the developed AV in different simulation scenes.

3.1 Training data set and testing scenarios

The AV training is conducted in freeway merging scene 01, which includes an urban driving section (around 400 m) and a freeway merging section (around 380 m). The training data set consists of 500 vehicle trajectories generated from VISSIM and 46 human-conducted drives extracted from Xu et al. (2021). Out of these 546 cases, 40 trajectories are defined as safe-behavior driving, as the vehicles drive safely and merge smoothly without fluctuations in acceleration while merging onto the freeway; ten trajectories are associated with crashes and labeled as unsafe-behavior driving; the remaining 496 trajectories are general driving, where safe driving is maintained but may involve speeding or aggressive driving due to fluctuations in speed and acceleration.

Different traffic flow conditions are applied to diversify the training scenarios (e.g. long and short headways toward the precedent car, which impact the control of the AV) for the AV development. The selected freeway reaches a peak volume of over 8,000 veh/h, and the selected urban arterial has an average peak flow of over 4,000 veh/h. The traffic flow conditions are set from 0 to 8,500 veh/h and 0 to 4,500 veh/h for the freeway and the arterial, respectively. Both are consistent with the typical hourly traffic volumes from the Victorian Government Data Directory (Discover.data.vic.gov.au, 2021). Traffic control measures such as ramp metering and traffic lights are implemented as well. The signal timing of the ramp metering mimics the operation in the real world, where the signals are coordinated based on the latest traffic flow (Vicroads, 2022). Specifically, the metering timing rates are 7–15 s of red, 1–2 s of green and 1–2 s of yellow. For standard traffic signals, the timing rates are 50–65 s of red, 30–40 s of green and 5–7 s of yellow.
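For illustration, these timing ranges can be captured in a small configuration type from which each signal cycle draws its phase durations; the ranges mirror the text, while the type and method names are hypothetical.

    using UnityEngine;

    // Illustrative signal-timing configuration with per-phase ranges (seconds).
    public struct SignalTiming
    {
        public Vector2 RedRange, GreenRange, YellowRange; // (min, max)

        public static readonly SignalTiming RampMetering = new SignalTiming
        {
            RedRange = new Vector2(7, 15),
            GreenRange = new Vector2(1, 2),
            YellowRange = new Vector2(1, 2)
        };

        public static readonly SignalTiming StandardSignal = new SignalTiming
        {
            RedRange = new Vector2(50, 65),
            GreenRange = new Vector2(30, 40),
            YellowRange = new Vector2(5, 7)
        };

        // Draw this cycle's phase durations from the configured ranges.
        public float DrawRed() { return Random.Range(RedRange.x, RedRange.y); }
        public float DrawGreen() { return Random.Range(GreenRange.x, GreenRange.y); }
        public float DrawYellow() { return Random.Range(YellowRange.x, YellowRange.y); }
    }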

There is a specific spawn point for the developed AV to enter the training scene, where a straight two-lane arterial is applied. The AV is respawned during the training process once it encounters functional failures (i.e. perception or motion failures) or gets involved in a road crash (i.e. collision with objects, pedestrians or other vehicles). Each respawn is treated as a new generation of the model, hence the term “generation.” To provide a dynamic training scenario, the background traffic flow and pedestrians are randomly generated every time the AV is respawned. Such traffic dynamics help create different vehicle-to-vehicle and vehicle-to-pedestrian interactions, contributing to diverse situational contexts. As a result, the AV can develop via a form of self-evolution by absorbing the experiences from previous failures.

A comparative test is conducted after the development of the AV model to evaluate its driving performance, after which the AV model is transferred to two added environments to validate its driving performance further. Figure 6 demonstrates the workflow of the AV development and evaluation.

3.2 Performance measurement

Several essential parameters are selected and measured as performance indicators to demonstrate the intelligence of the developed AV:

Speed variation: this is a general indicator to estimate whether the developed AV can perform human-like driving maneuvers. As an individual vehicle, the AV’s speed is measured by observation over time and space. The instantaneous speed of the AV is defined as:

(10) $u_{AV} = \frac{dx}{dt} = \lim_{(t_2 - t_1) \to 0} \frac{x_2 - x_1}{t_2 - t_1}$
where $u_{AV}$ denotes the instantaneous speed, $t_i$ denotes the time instants during observation and $x_i$ denotes specific points on the road network that the AV has passed.

Then, the average speed can be calculated by taking the arithmetic mean of the observation:

(11) $\overline{u_{AV}} = \frac{1}{N}\sum_{i=1}^{N} u_i$

The associated acceleration rate is estimated with respect to instantaneous speeds during the traveling:

(12) $\bar{a} = \frac{d^2 x_i}{dt_i^2}$
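Equations (10)–(12) reduce to simple finite differences over sampled trajectories; a minimal sketch, assuming positions in meters sampled at known times, is:

    using System.Collections.Generic;
    using System.Linq;

    public static class SpeedMetrics
    {
        // Finite-difference speeds per interval, as in equation (10).
        public static List<double> InstantaneousSpeeds(IList<double> x, IList<double> t)
        {
            var u = new List<double>();
            for (int i = 1; i < x.Count; i++)
                u.Add((x[i] - x[i - 1]) / (t[i] - t[i - 1]));
            return u;
        }

        // Arithmetic mean of the observed speeds, as in equation (11).
        public static double AverageSpeed(IList<double> x, IList<double> t)
        {
            return InstantaneousSpeeds(x, t).Average();
        }

        // Second finite difference of position, as in equation (12).
        public static List<double> Accelerations(IList<double> x, IList<double> t)
        {
            var u = InstantaneousSpeeds(x, t);
            var a = new List<double>();
            for (int i = 1; i < u.Count; i++)
                a.Add((u[i] - u[i - 1]) / (t[i + 1] - t[i]));
            return a;
        }
    }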

Lateral deviation: this calculates the distance between the position of the AV and the centerline of a lane. The lateral position of a vehicle is traditionally measured by regulating the lateral error Δl at a look-ahead distance ds to zero (Kosecka et al., 1997). In this case, the ML-Agents in Unity maintain a reasonable lateral acceleration $a_y$, and the lateral deviation can then be estimated by measuring the yaw angle of the AV with respect to the centerline of the road:

(13) $\Delta l = \ddot{l}_c\, d_s\, \varepsilon_r$
where $\Delta l$ denotes the lateral deviation, $\ddot{l}_c$ represents the vehicle lateral control term, $d_s$ denotes the look-ahead distance and $\varepsilon_r$ denotes the yaw error of the vehicle relative to the road.

Accident rate: this is an important performance indicator to verify the AV’s capability for collision avoidance. It records how many times the developed AV was involved in crashes. Such recording is based on the interaction between 3D objects in the simulation environments. Accidents can be observed directly during the simulation, and the ML-Agents count the crash frequency simultaneously.

Conflict probability: this is an indicator to measure traffic conflicts. The conflict probability is converted from the time-to-collision (TTC) (Lee, 1976; Van Der Horst and Hogema, 1993; Morando et al., 2018) with the assistance of the surrogate safety assessment model (SSAM) (Gettman and Head, 2003) via trajectory analysis, demonstrating the tendency of AV-involved traffic conflicts. TTC is the time expected to pass before two vehicles collide if they remain on their current trajectories, and it is measured differently in different situations:

(14) $TTC = \begin{cases} d_2/v_2 & \text{if } d_1/v_1 < d_2/v_2 < (d_1 + l_1 + w_2)/v_1 \text{ (side)} \\ d_1/v_1 & \text{if } d_2/v_2 < d_1/v_1 < (d_2 + l_2 + w_1)/v_2 \text{ (side)} \\ (X_1 - X_2 - l_1)/(v_2 - v_1) & \text{if } v_2 > v_1 \text{ (rear end)} \\ (X_1 - X_2)/(v_1 + v_2) & \text{(head on)} \end{cases}$
where $v_i$, $l_i$, $d_i$, $w_i$ and $X_i$ represent speed, length, distance, width and vehicle position, respectively, and the subscripts (i.e. 1 and 2) denote the two individual vehicles under such circumstances. A low TTC represents a high probability of conflict. An unsafe situation is generally represented by a TTC of less than 1.5 s.
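The rear-end and head-on branches of equation (14) are straightforward to compute from trajectories; a minimal sketch, with variable names matching the equation, is:

    using System;

    public static class SurrogateSafety
    {
        // Rear-end TTC from equation (14): vehicle 2 approaching leader 1.
        // X: positions along the lane (m), l1: leader length (m), v: speeds (m/s).
        // Returns PositiveInfinity when the gap is not closing.
        public static double RearEndTtc(double X1, double X2, double l1, double v1, double v2)
        {
            if (v2 <= v1) return double.PositiveInfinity;
            return (X1 - X2 - l1) / (v2 - v1);
        }

        // Head-on TTC from equation (14).
        public static double HeadOnTtc(double X1, double X2, double v1, double v2)
        {
            return (X1 - X2) / (v1 + v2);
        }

        // Flag a conflict using the classic 1.5 s threshold discussed in the text.
        public static bool IsConflict(double ttc)
        {
            return ttc < 1.5;
        }
    }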

3.3 Autonomous vehicle’s performance in the development scene

The AV’s intelligence is formulated within 170 generations based on the framework of the ANN and GA integrated module. Figure 7 illustrates the AV training process. Figure 7(a)–(c) demonstrates the progress of driving pattern development under free-flow conditions. After the first 40 generations, random motions (i.e. ignorance of lane marks), retrograde motions due to navigation failure and head-on collisions with side objects (i.e. walls, pedestrians or other vehicles) are eliminated. Once handling the static surroundings is sufficiently learned by the AV model, handling dynamic objects in the surroundings, such as overtaking a car or engaging with a car while lane-changing, is trained. The results are displayed in Figure 7(d)–(f), which takes 70 generations (from 40 to 110). For a more complicated interactive scene, e.g. freeway merging, which involves more significant speed adaptation and mandatory lane changing, the control model requires another 40 generations (from 110 to 150) to ensure secure merging. From generation 150 to 165, no crashes are detected on the urban driving roads, and three crashes are detected on the freeway merging section, in which the conflict between the AV and other on-ramp vehicles is the main cause. After 165 generations, the AV can merge onto the freeway securely and smoothly. These results are shown in Figure 7(g)–(i).

To validate the developed AV’s overall performance, a comparison between the developed AV and the previous human-controlled vehicles (Xu et al., 2021) in freeway merging scene 01 is conducted. Note that two scenarios are applied, namely, with and without ramp metering. Variables such as variations in merging speed and acceleration, as well as accident rates, are extracted as the principal indicators. The comparison results are shown in Figure 8.

The comparison results demonstrate that the AV outperforms the human-controlled vehicles in this driving scene under various conditions. In particular:

  • over 50% of the human-driven cars are found at high merging speeds (> 65km/h), and over 30% of them execute acceleration or deceleration with sharp rates (± 8 m/s2);

  • most AV merging speeds are within 50–60 km/h, which is an acceptable speed range (Daamen et al., 2010), while the acceleration/deceleration rates are within ± 2.5 m/s2; and

  • the developed AV is much safer than human-controlled vehicles, reflected in the reduced accident rate (< 1%).

3.4 The transferability of the developed autonomous vehicle

To demonstrate the validity of the developed AV, the model is then tested in two new environments, namely, freeway merging scene 02 and the urban driving scene [Figure 2(b)–(c)]. The unique features of the two new environments are the different infrastructure design for the freeway on-ramp and the more complicated interactions with other vehicles and pedestrians on an urban arterial. The challenge is that the trained AV was not exposed to such situational contexts, and thus it is interesting to see its performance given these new complexities.

It is found that the AV’s overall performance in the new environments has a positive correlation with the number of simulations. As shown in Figure 9, it takes nearly 60 simulations until the performance of the AV becomes stable in all environments (every simulation run lasted 3,600 s). In the new driving scenes, both the AV’s speed and lateral control fluctuate during the first 30–40 simulation runs. Relatively clear fluctuations in both the average speed and the lateral deviation are identified. For example, the average speed ranges from 50–73 km/h and 42–63 km/h in freeway merging scene 02 and the urban driving scene, respectively, and the lateral deviation ranges from 0–2 m from the centerline of the driving lane in both scenes. Simultaneously, accidents frequently occur during this period due to conflicts with other vehicles from minor road entrances or with pedestrians. Such situations are expected because the calibrated lateral control was based on vehicle-to-vehicle interactions on multi-lane roads and a specific on-ramp design. The performance improves after 50 simulations, where the average speed and lateral deviation are on a stable trend, and the accident rate decreases to around 2%. Full vehicular intelligence is developed after 70 simulations. In particular,

  • the vehicle motion is smooth and gentle, reflected in the average speeds of 65 and 48 km/h in the freeway merging scene and urban driving scene, respectively;

  • the vehicle can drive in the right middle of the lane, reflected as the lateral deviation close to 0; and

  • the safety of operation is ensured, reflected in the minimized accident rate (< 1%).

In terms of the probability of conflicts, on average, a relatively high value (> 0.7) is found in all testing environments regardless of the actual number of crashes. Although a high potential risk was indicated throughout the simulation, actual accidents rarely occurred. During the testing period, the AV keeps a close distance to its leading car, because the AV is able to maintain short headways toward its leading vehicles. Such behavior is consistent with the expectation that traffic flow efficiency can be increased without sacrificing safety; however, it also contributes to risky-looking statistical results.

4. Experiment results

This section presents the results of the human-subject comparative experiments. In particular, Subsection 4.1 demonstrates the difference in driving performance between human drivers and AV; Subsection 4.2 illustrates feedback from participants concerning the simulation study.

4.1 Inconsistency in driving patterns between conventional vehicle and autonomous vehicle modes

This section presents the resulting actions of human drivers and the AV when facing certain situations and reflects the inconsistency in driving patterns. Furthermore, intervention behavior (by participants during the autopilot mode) is frequently captured. As a general remark, the inconsistency in decision-making is significant, as intervention happened in 44 autopilot cases (81.48%).

4.1.1 Driving behavior

The difference in driving patterns is clearly identified between the manual and the autopilot modes, especially in the freeway merging scenes. Figure 10 displays the comparison with respect to average speed, average acceleration rates and average task finish time. The task finish time is defined as the time a driver spent changing driving maneuvers (similar to reaction time). For instance, in the case of freeway merging, the “task finish time” measures the time from the vehicle arriving at the merging section to actually reaching the mainstream lane. In the urban driving scene, in a similar manner, it measures the time from recognizing a cut-in vehicle/pedestrian to actually taking action.

With respect to speed, it can be observed that human drivers generally drive at a slower speed than the AV (p < 0.005). Remarkably, in the case of freeway merging without ramp metering, the speed reduction is more than 20% [see “Freeway Merging 02” in Figure 10(a)]. By contrast, other results indicate that human drivers tend to be more aggressive (p < 0.005). The acceleration rates of human drivers show larger fluctuations in both the freeway merging and urban driving cases [see Figure 10(b)]; they range from −8 to 8 m/s2. With regard to task finish time, in all cases it takes much less time for the AV to complete a given task (p < 0.001). For example, the finish time was 7.9 s for the human drivers but 1.7 s for the AV in freeway merging scene 02.

4.1.2 Traffic conflicts and accidents

Traffic accidents happened only during the manual driving mode. Nine crashes occurred, of which eight were side collisions in the freeway merging scenes (three in scene 01 with ramp metering and five in scene 02 without ramp metering), and one was a rear-end collision in the urban driving scene. Figure 11 illustrates some of these collisions.

TTC is estimated to evaluate the level of crash risk, and 1.5 s is set as the safety threshold. The time headways between vehicles are measured, and “risky” situations are thereby identified. It is found that the high-level risk area (i.e. TTC ≤ 1.0 s) is concentrated in two sections: the freeway on-ramp and the merging sections. Furthermore, the risk area coincides with the trajectories of the AV (p < 0.005), because the TTC between the AV and its surrounding vehicles is always around 1.0 s. Based on such an estimation, accidents appear more likely to happen under the AV mode.

The potential likelihood of a collision by the AV is also reflected in the reactions of the passengers riding along with the AV. As mentioned, intervention behavior during the autopilot mode was observed frequently, especially during merging or cut-in situations. This indicates a low level of trust during AV operation, even though no actual accident occurred. Taking freeway merging as an example, different first-person perspectives are shown in Figure 12. Human drivers’ reactions are affected by their perceptions. In manual driving mode [Figure 12(a)], their driving patterns are affected by moving vehicles in the mainstream, especially the target lane. Out of 18 participants, 14 (77.78%) tended to stop for 3–4 s at the merging section and keep as far as possible from the preceding vehicle, resulting in slower merging speeds and TTC values larger than the safety threshold. Their typical first-person perspectives are displayed in Figure 12(a). By contrast, the quicker merging time, faster merging speed and small headways in the autopilot mode had a negative effect on participants’ perceptions. A high-speed motion in a high-risk area causes a strong visual impact, which is perceived as an about-to-crash situation. The typical first-person perspectives are displayed in Figure 12(b).

4.1.3 Driving intervention behavior

Most participants (14 out of 18) interrupted the autopilot mode at least once during the experiment, particularly during merging and vehicle cut-in situations under high-flow traffic. In addition to observations such as perceived hazards, statistical results such as the probability of conflicts (i.e. converted from TTC using SSAM) provide clearer evidence for the intervention behavior. Taking freeway merging as an example, Figure 13(a) and 13(b) shows the conflict probability over different levels of traffic flow on the mainstream and the on-ramp during the manual and autopilot modes. It can be seen that the likelihood of conflicts increases with increments in traffic flow during the manual driving mode, where the conflict probability reaches its highest (> 0.8) when the on-ramp flow is over 600 or the mainstream flow is over 6,000. By contrast, an irregular distribution of the conflict probability is identified during the autopilot mode, with a relatively high conflict probability (∼ 0.8) under all conditions. The developed AV is able to maintain a headway of less than 0.5 m or even 0.3 m as the safety threshold due to its intelligence. Nevertheless, classic safety measures such as the TTC would still suggest a high conflict probability. Passengers showed high reluctance toward such a driving pattern, as keeping a short headway or spacing makes them feel unsafe.

The driving speed (p < 0.005) and relative speed (p < 0.005) affect the intervention behavior. Of the 44 interventions in total, over 86% occurred when travel speeds were over 65 km/h in the freeway merging scenes, and over 70% occurred when the driving speed was over 50 km/h in the urban driving scene. Both thresholds are considered “high” for the relevant scenarios. In general, the higher the vehicle speed (compared to surrounding vehicles), the more likely an intervention occurs. Nevertheless, no significant effect of driving distance on DIB was identified (p = 0.672); interventions occurred throughout the driving tasks. Figure 13(c) shows the DIB in relation to driving speed and distance.

4.1.4 Driver workload

One thing worth noting is that fewer intervention cases occurred when traffic was regulated, i.e. with ramp metering. This suggests that a stop before merging onto the freeway improves the AV’s trustworthiness to some extent. This also links to the driver workload during the driving experiment.

The resulting heart rate (HR) is plotted in Figure 14. The overall trends indicate that drivers have a higher stress level during the autopilot mode. On average, the HR is 22 bpm (beats per minute) higher during the autopilot mode than in the manual driving mode. Compared to the 3-min baseline, freeway merging without ramp metering caused the highest stress level, followed by urban driving and freeway merging with ramp metering. The increase in HR reflects the stress of participants toward the autopilot mode and the fact that they feel the need to intervene.

4.2 Feedback from participants

Table 3 summarizes the demographic information. It is worth noting that the sample composition covers a relatively broad range of age, driving experience and VR experience. Specifically, both young and middle-aged participants with driving experience varying from one to over 20 years are involved, and over 70% of participants had VR experience before attending the experiment.

Among the personal characteristics, driving experience matters to the driving patterns (p < 0.005). The experienced drivers (i.e. with driving experience of over 20 years) drove much more gently during all the tasks in the manual driving mode. For example, their acceleration rates are mild and show small fluctuations (±4 m/s2). They also tended to trust the autopilot mode more, as they performed fewer interventions. By contrast, no significant interactions between driving performance and age (p = 0.672), gender (p = 0.863) or VR experience (p = 0.677) are identified.

In terms of intervention behavior, over 77% of participants (14 out of 18) stated that they did not trust the autopilot mode. They felt unsafe in certain scenarios and performed interventions. Nevertheless, no significant interactions (p = 0.861) between intervention and personal characteristics are identified.

According to the feedback from the safety perception survey (Table 4), over 72% (13 out of 18) of participants perceived near-crash situations during the AV mode and intervened. Nevertheless, no crashes actually occurred during the autopilot mode. All participants voted freeway merging scene 02 (i.e. merging without ramp metering) as the most dangerous scene, followed by the urban driving scene and freeway merging scene 01, which is consistent with their HR distribution. Note that Cronbach’s α indicates the consistency of the answers from participants, and it is sensitive to the number of items in the scale. As internal consistency reliability is normally underestimated in exploratory research, values between 0.6 and 0.7 are acceptable in this driving experiment (Nunnally and Bernstein, 1994).

The SSQ is a self-reported symptom checklist that covers 16 symptoms, including fatigue, headache, eyestrain, nausea, etc. Such symptoms are closely related to simulation discomfort, and they are rated on a four-point scale (from none to severe) (Kennedy et al., 1993). The results of the SSQ show that 3D motion sickness was a minor issue during the experiment; in general, the use of VR simulation did not cause severe discomforts such as visual fatigue, nausea or headache. Only minor sickness from VR-induced 3D motion was reported, with some signs of discomfort, including dizziness with eyes closed (16.67%), burping (22.22%), vertigo (16.67%) and sweating (16.67%). This suggests that the results of the VR simulation were not affected by 3D motion sickness. Table 5 presents the details of the SSQ results.

Finally, the definitive survey regarding simulator realism was also conducted on a five-point Likert scale. Participants tended to appreciate the realism and functionality of the proposed simulation platform, with a Cronbach’s α of 0.67. The details are shown in Table 6. Specifically, most participants (16 out of 18) stated that the visual impact provided by the simulator is excellent and that the manual mode is easy to operate. Over half of the participants agreed that such a simulation platform could also be useful for training purposes and the introduction of AVs.

5. Conclusion

It is of vital importance to better understand the AV’s revolutionary driving styles to facilitate user adoption. Our study provides a complementary approach for AV-involved behavioral analysis based on the naturalistic simulation platform, attempting to fill the existing research gaps in preparing close-to-reality tools for AV experience and further understanding human behavior during high-level autonomous driving. In this paper, we developed an AV model based on an integrated ANN-based framework for specific behavioral analysis to examine the difference in driving patterns between manual and autonomous driving modes from the driver’s perspective. Compared to the AV control models developed by other mainstream approaches, such as deep reinforcement learning (Xiong et al., 2016; Peng et al., 2021), similar results concerning vehicle operation, traffic efficiency and obstacle detection were obtained from our model. Taking advantage of simulation strategies and VR technology, we uncovered the detailed procedure of the AV development, recreated the critical driving scenarios in a risk-free manner and conducted the analysis of human–machine interaction in an immersive environment. Both practical and research implications can be derived from the proposed approach, including the AV simulation, 3D models and VR experiments.

5.1 Key findings

First, this study developed an AV control model with the assistance of the integrated simulation platform. The flexibility and extensibility of VR technology allow lifelike scenarios to be recreated, which is effective in providing naturalistic testing scenes without exposure to actual damage as in an empirical experiment. The integration between Unity and VISSIM makes it possible to generate complex driving scenarios that reproduce situational contexts. In particular, the proposed approach features two-way communication between the 3D environment and VISSIM so that more reliable traffic dynamics can be generated in a high-fidelity scene. Compared to existing AV development approaches (Kuutti et al., 2020), the proposed method improves the quality of the simulation environment. Based on the performance of the AV control model, the results reflect that:

  • the AV is able to maintain small headway with its preceding vehicles at a stable velocity and acceleration rate. An extremely low accident rate (< 0.1) during autonomous driving can be identified;

  • the commonly recognized TTC threshold (1.5 s) should be adjusted for AV-involved safety analysis, as a TTC of less than 1 s can be a typical situation during AV operation; and

  • an inverse relationship between conflict probability and crash occurrence can be identified when TTC is processed against a 1.5 s threshold.

Interestingly, despite small TTC values suggesting a high crash risk (> 0.7), no collisions ultimately occurred during the autopilot mode. The safety margin of an AV tended to be smaller than that of a conventional car, given its improved control and quicker reaction time. Thus, a more appropriate interpretation of surrogate safety measures for AV-involved safety assessment is needed.
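To make the threshold discussion concrete, the sketch below screens car-following samples for conflicts using TTC = spacing / closing speed under both the classic 1.5 s threshold and the tighter 1 s value suggested for AVs; the gap and speed samples are hypothetical.

```python
# Minimal TTC screening with adjustable thresholds; samples are hypothetical.
def ttc(gap_m, v_follow_ms, v_lead_ms):
    """TTC = spacing / closing speed; infinite when the gap is not closing."""
    closing = v_follow_ms - v_lead_ms
    return gap_m / closing if closing > 0 else float("inf")

# (gap in m, follower speed in m/s, leader speed in m/s)
samples = [(8.0, 16.0, 14.0), (3.0, 18.0, 15.0), (2.0, 20.0, 17.0)]
for gap, vf, vl in samples:
    t = ttc(gap, vf, vl)
    print(f"TTC = {t:4.1f} s  conflict@1.5s: {t < 1.5}  conflict@1.0s: {t < 1.0}")
```

The second sample (TTC = 1.0 s) is flagged under the 1.5 s rule but not under the 1 s rule, which is exactly the kind of case the adjusted threshold is meant to reinterpret.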
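Returning to the Unity–VISSIM coupling highlighted above: VISSIM exposes a COM automation interface (cf. Ramadhan et al., 2019), and one plausible step-locked loop is sketched below. The network path and the bridge functions are hypothetical, and attribute names can differ across VISSIM versions; this is a sketch of the coupling idea, not the platform’s actual scripts.

```python
# A plausible step-locked coupling loop over the VISSIM COM interface
# (cf. Ramadhan et al., 2019). Windows-only; requires pywin32 and a licensed
# VISSIM installation. Path and bridge functions are hypothetical.
import win32com.client

def send_to_unity(veh_no, pos, speed):
    """Hypothetical bridge: push vehicle state to the 3D client (e.g. via UDP)."""

def receive_ego_from_unity():
    """Hypothetical bridge: pull the player-driven ego vehicle's state back."""
    return None

vissim = win32com.client.Dispatch("Vissim.Vissim")
vissim.LoadNet(r"C:\models\freeway_merge.inpx")   # hypothetical network file

for step in range(3600):                          # e.g. one simulated hour
    vissim.Simulation.RunSingleStep()             # advance the micro-simulation
    for veh in vissim.Net.Vehicles.GetAll():      # forward direction: VISSIM -> 3D
        send_to_unity(veh.AttValue("No"),
                      veh.AttValue("Pos"),        # position along current link
                      veh.AttValue("Speed"))
    ego = receive_ego_from_unity()                # reverse direction: 3D -> VISSIM
    if ego is not None:
        # e.g. adjust a designated vehicle's attributes so that background
        # traffic reacts to the human/AV-controlled ego vehicle
        pass
```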

Second, the human-in-the-loop analysis linked the AV development and its execution. It is valuable to preview people’s perceptions and reactions when interacting with a fully automatic vehicle in specific driving scenes. An integrated VR simulation platform with AV features allows for such investigations while keeping side effects such as 3D motion sickness to a minimum. Motivated by previous studies that focused on the user experience of riding in AVs or on a specific driving maneuver such as take-over on an arterial (Sportillo et al., 2018; Morra et al., 2019; Zou et al., 2021), we moved one step further to investigate user acceptance of the autopilot in the most representative high- and low-speed road networks. Based on the driving experiment, the results reflected that:

  • the main difference in driving style between human drivers and AVs lies in vehicle motion control: human driving behavior is highly individual, whereas the AV’s driving behavior is highly consistent;

  • the AV’s decision-making is based on calculation, resulting in efficient driving maneuvers under any circumstances, whereas human drivers’ decision-making is affected by various factors such as traffic conditions and driving experience; and

  • traffic control measures such as ramp metering might not perfectly suit AV operations; however, ramp metering did help release driving workload at a mandatory freeway merging section and reduced the probability of DIB.

Third, another interesting finding of this study is that perception plays a significant role in users’ decisions to intervene during autonomous driving. Even though participants were told that the AV is fully automatic with an extremely low accident rate, they still tended to intervene with the autopilot under certain circumstances, especially in the merging cases without ramp metering. More specifically, the results indicate that:

  • the balance between transport efficiency and users’ perceptions needs further research, as it is important to reduce the probability of intervention behavior during autonomous driving to facilitate user acceptance; and

  • a driving pattern consistent with the surrounding vehicles, especially in a mixed traffic environment, can to some extent prevent the passenger from intervening during the autopilot mode.

5.2 Limitations and future work

Although the AV control model in this study was based on self-developed algorithms in hypothetical scenes, where the available infrastructure designs for urban roads and freeway on-ramps were fully adapted, the AV performed as expected of Level 4 autonomy and thereby supported the behavioral analysis. In addition, the simulation platform has the flexibility to modify factors such as the length of the merging lane, multiple merging lanes and natural effects (e.g. weather conditions) for specific test cases. In such cases, additional training may be required to enhance the machine learning-based AV control module. In terms of the human-in-the-loop analysis, the use of a convenience sample restricts the generalization of the findings to a broader population, and the results might be biased to some extent. Different findings in relation to personal characteristics might be revealed by further and more extensive research. However, the devised driving scenarios and the comparative experiment design in this study help make the results valuable and meaningful. It is also worth reporting that the execution of the simulation and the control model currently puts high pressure on the CPU and GPU of our experiment PC. There is scope to optimize the command scripts, which contain over 1,000 lines of code, to reduce the processing load and thus make the platform applicable to larger-scale or more complicated scenarios where the computational demand is much higher.

The proposed platform has the potential to support AV intelligence tests on various subjects, such as the deployment of sensing infrastructure, the operation of connected AV fleets or the investigation of human–machine interaction. One ongoing work by the authors focuses on improving the hazard detection of AVs. Using the proposed platform and simulation, it is possible to identify hazards perceived by human drivers that the AVs fail to recognize. Such knowledge is important for gaining more insight into AV development (Huang et al., 2016; Van Brummelen et al., 2018; Rosique et al., 2019).

Furthermore, the proposed platform serves as a sound alternative for data generation. In the AV research field, data scarcity is a common issue, and naturalistic, interconnected and highly detailed databases are lacking. Another direction for the authors is to investigate the performance and validity of the data generated from the proposed tool.

Figures

Figure 1 The development procedure of the simulation platform

Figure 2 Details of the VR environments

Figure 3 Machine learning process involved in the study

Figure 4 Working status of the AV after applying the control algorithms

Figure 5 Human-in-the-loop experimental setup

Figure 6 The basic workflow of the AV development and evaluation (AV: autonomous vehicle; CV: conventional vehicle)

Figure 7 Machine learning progress in the AV development

Figure 8 Comparison between the developed AV and human-controlled vehicle in relation to

Figure 9 Performance of the developed AV in different simulation environments

Figure 10 Difference in driving behavior between CV and AV modes

Figure 11 Typical observed collisions

Figure 12 Difference of the hazard perception between CV and AV modes

Figure 13 Difference of the conflict probability concerning freeway merging between manual (a) and AV (b) modes; and DIB during the experiment (c)

Figure 14 Participants’ average heart rate distribution during the experiment

Table 1 The AV’s key characteristics

Variables Description Range
Sensors Virtually achieved by programming to detect the surroundings and generate inputs such as distance values to the ANN Sensing coverage of 120°; sensing radius of 8 m
Speed The AV’s moving speed is determined by presupposed traffic rules and traffic conditions 0 – speed limit
Speed limit The speed limit is presupposed with regard to traffic rules 50 km/h on urban roads; 80 km/h on freeways
Engine torque In the reasonable range of a mainstream car 350 N·m
Wheel diameter The common size for a sedan 0.35 m
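Since Table 1 specifies the virtual sensor only by its 120° coverage and 8 m radius, a minimal geometric sketch of such a distance-sensor fan is given below; the ray count, point-obstacle model and hit tolerance are assumptions for illustration.

```python
# A minimal sketch of a virtual distance-sensor fan: rays spread across a
# 120-degree field of view, clipped at 8 m, returning distances for the ANN.
# Ray count, obstacle model and hit tolerance are assumptions.
import math

FOV_DEG, RADIUS_M, N_RAYS = 120.0, 8.0, 5

def ray_directions(heading_rad):
    """Unit vectors for N_RAYS rays spread evenly across the field of view."""
    half = math.radians(FOV_DEG) / 2
    step = math.radians(FOV_DEG) / (N_RAYS - 1)
    return [(math.cos(heading_rad - half + i * step),
             math.sin(heading_rad - half + i * step)) for i in range(N_RAYS)]

def sense(origin, heading_rad, obstacles):
    """Distance along each ray to the nearest point obstacle, capped at RADIUS_M."""
    readings = []
    for dx, dy in ray_directions(heading_rad):
        best = RADIUS_M
        for ox, oy in obstacles:
            # Project the obstacle onto the ray; accept if close to the ray.
            t = (ox - origin[0]) * dx + (oy - origin[1]) * dy
            if 0 < t < best:
                px, py = origin[0] + t * dx, origin[1] + t * dy
                if math.hypot(ox - px, oy - py) < 0.5:  # 0.5 m hit tolerance
                    best = t
        readings.append(best)
    return readings

# Obstacle 4 m ahead is detected; one at 10 m is beyond the 8 m radius.
print(sense((0.0, 0.0), 0.0, [(4.0, 0.2), (10.0, 0.0)]))
```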

Table 2 Questionnaire content

Questionnaire type Application Content
Demographics Before the experiment • Gender
• Age
• Driving experience (years)
• Average kms driven per year (“less than 1,000 km” – “more than 15,000 km”)
• Previous experience with VR (“never used” – “more than 10 times”)
SSQ Before and after each driving task Assess general discomforts based on subscales (“none” – “severe”), including:
• fatigue, headache, eyestrain, sweating, nausea, dizzy, vertigo, burping, etc.
Safety perception After the experiment Based on the Likert 5 scale (“strongly disagree (1)” – “strongly agree (5)”), including:
• whether nervousness was experienced
• whether the driving occurred smoothly
• whether the autopilot is safer than manual driving
• which scene is the safest/most dangerous
VDL After the experiment Based on the Likert 5 scale (“strongly disagree (1)” – “strongly agree (5)”), including:
• whether VR provided decent visualization
• whether the simulator was difficult to operate
• whether it is good for AV simulation
• whether it is useful for improving driving skills

Table 3 Results of the demographic survey

Content Frequency Percentage (%)
Gender
Male 12 66.7
Female 6 33.3
Age
18–25 5 27.8
26–35 7 38.9
36–45 4 22.2
46–55 2 11.1
Driving experience (years)
1–5 5 27.8
6–10 4 22.2
> 10 6 33.3
> 20 3 16.7
Average kms driven per year
< 1,000 1 5.6
1,001–5,000 6 33.3
5,001–10,000 5 27.8
> 10,000 6 33.3
VR Experience
Never Used 5 27.8
< 3 9 50.0
> 3 4 22.2

Table 4 Results of the safety perception survey

Safety perception Manual driving mode Cronbach’s α (ρT) Autopilot mode Cronbach’s α (ρT)
Mean SD Mean SD
Nervousness during driving tasks 3.32 0.85 0.63 4.39 0.52 0.62
Smooth driving 3.72 1.31 4.13 0.78
Safety of the driving mode 3.77 1.03 1.54 1.08
Intervention of the autopilot – – 4.82 0.47

Table 5 Results of the SSQ survey

Symptoms Before experiment After experiment
Mean SD Mean SD
Vertigo 0.17 0.37 0.42 0.76
Dizzy (eyes closed) 0.08 0.28 0.33 0.62
Dizzy (eyes open) 0.08 0.28 0.08 0.28
Blurred vision 0.08 0.28 0.17 0.37
Fullness of head 0.08 0.28 0.08 0.28
Burping 0.08 0.28 0.33 0.62
Nausea 0.08 0.28 0.08 0.28
Fatigue 0.17 0.37 0.17 0.37
Sweating 0.08 0.28 0.33 0.62
General discomfort 0.25 0.43 0.33 0.47

Table 6 Participants’ feedback regarding simulator realism

Simulator realism The VR experiment Cronbach’s α (ρT)
Mean SD
Good visualization 4.13 0.69 0.67
Difficult to operate 0.23 0.80
Help beginners 3.94 0.84
Improve driving skills 3.99 0.81
Good for AV simulation 4.17 0.87
More impressive 4.27 0.92

References

Adnan, N., Nordin, S.M., bin Bahruddin, M.A. and Ali, M. (2018), “How trust can drive forward the user acceptance to the technology? In-vehicle technology for autonomous vehicle”, Transportation Research Part A: Policy and Practice, Vol. 118, pp. 819-836.

Bonnefon, J.F., Shariff, A. and Rahwan, I. (2016), “The social dilemma of autonomous vehicles”, Science, Vol. 352 No. 6293, pp. 1573-1576.

Brown, B., Park, D., Sheehan, B., Shikoff, S., Solomon, J., Yang, J. and Kim, I. (2018), “Assessment of human driver safety at dilemma zones with automated vehicles through a virtual reality environment”, 2018 Systems and Information Engineering Design Symposium (SIEDS), IEEE, pp. 185-190.

Bureau of Infrastructure and Transport Research Economics (BITRE) (2022), “Road trauma Australia 2021 statistical summary”, BITRE, Canberra ACT.

Campbell, S., O'Mahony, N., Krpalcova, L., Riordan, D., Walsh, J., Murphy, A. and Ryan, C. (2018), “Sensor technology in autonomous vehicles: a review”, 2018 29th Irish Signals and Systems Conference (ISSC), IEEE, pp. 1-4.

Choi, J.K. and Ji, Y.G. (2015), “Investigating the importance of trust on adopting an autonomous vehicle”, International Journal of Human-Computer Interaction, Vol. 31 No. 10, pp. 692-702.

Da Silva, I.N., Spatti, D.H., Flauzino, R.A., Liboni, L.H.B. and dos Reis Alves, S.F. (2017), “Artificial neural network architectures and training processes”, Artificial Neural Networks, Springer.

Daamen, W., Loot, M. and Hoogendoorn, S.P. (2010), “Empirical analysis of merging behavior at freeway on-ramp”, Transportation Research Record: Journal of the Transportation Research Board, Vol. 2188 No. 1, pp. 108-118.

Detjen, H., Faltaous, S., Pfleging, B., Geisler, S. and Schneegass, S. (2021), “How to increase automated vehicles’ acceptance through in-vehicle interaction design: a review”, International Journal of Human–Computer Interaction, Vol. 37 No. 4, pp. 308-330.

Discover.data.vic.gov.au (2021), “Typical hourly traffic volume – Victorian government data directory”, available at: https://discover.data.vic.gov.au/dataset/typical-daily-traffic-volume-profile (accessed 15 December 2021).

Dosovitskiy, A., Ros, G., Codevilla, F., Lopez, A. and Koltun, V. (2017), “CARLA: an open urban driving simulator”, Conference on robot learning, PMLR, pp. 1-16.

Duarte, F. and Ratti, C. (2018), “The impact of autonomous vehicles on cities: a review”, Journal of Urban Technology, Vol. 25 No. 4, pp. 3-18.

Ejercito, P.M., Nebrija, K.G.E., Feria, R.P. and Lara-Figueroa, L.L. (2017), “Traffic simulation software review”, 2017 8th International Conference on Information, Intelligence, Systems & Applications (IISA), IEEE, pp. 1-4.

Elmquist, A. and Negrut, D. (2020), “Methods and models for simulating autonomous vehicle sensors”, IEEE Transactions on Intelligent Vehicles, Vol. 5 No. 4, pp. 684-692.

Fang, J., Zhou, D., Yan, F., Zhao, T., Zhang, F., Ma, Y., Wang, L. and Yang, R. (2020), “Augmented LiDAR simulator for autonomous driving”, IEEE Robotics and Automation Letters, Vol. 5 No. 2, pp. 1931-1938.

Feng, S., Yan, X., Sun, H., Feng, Y. and Liu, H.X. (2021), “Intelligent driving intelligence test for autonomous vehicles with naturalistic and adversarial environment”, Nature Communications, Vol. 12, pp. 1-14.

Gao, S., Paulissen, S., Coletti, M. and Patton, R. (2021), “Quantitative evaluation of autonomous driving in CARLA”, 2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops), IEEE, pp. 257-263.

Gettman, D. and Head, L. (2003), “Surrogate safety measures from traffic simulation models”, Transportation Research Record: Journal of the Transportation Research Board, Vol. 1840 No. 1, pp. 104-115.

Goedicke, D., Li, J., Evers, V. and Ju, W. (2018), “VR-OOM: virtual reality on-road driving simulation”, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-11.

Goh, J., Hu, S. and Fang, Y. (2019), “Human-in-the-loop simulation for crane lift planning in modular construction on-site assembly”, Computing in Civil Engineering 2019: Visualization, Information Modeling, and Simulation, American Society of Civil Engineers, Reston, VA, pp. 71-78.

Gu, T. and Dolan, J.M. (2012), “On-road motion planning for autonomous vehicles”, International Conference on Intelligent Robotics and Applications, Springer, pp. 588-597.

Haarnoja, T., Zhou, A., Abbeel, P. and Levine, S. (2018), “Soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor”, International conference on machine learning, PMLR, pp. 1861-1870.

Henne, M., Schwaiger, A. and Weiss, G. (2019), “Managing uncertainty of AI-based perception for autonomous systems”, AISafety@ IJCAI.

Hoogendoorn, R., van Arem, B. and Hoogendoorn, S. (2014), “Automated driving, traffic flow efficiency, and human factors: literature review”, Transportation Research Record: Journal of the Transportation Research Board, Vol. 2422 No. 1, pp. 113-120.

Huang, W., Wang, K., Lv, Y. and Zhu, F. (2016), “Autonomous vehicles testing methods review”, 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), IEEE, pp. 163-168.

Hussein, A., Gaber, M.M., Elyan, E. and Jayne, C. (2017), “Imitation learning: a survey of learning methods”, ACM Computing Surveys, Vol. 50 No. 2, pp. 1-35.

Jing, P., Xu, G., Chen, Y., Shi, Y. and Zhan, F. (2020), “The determinants behind the acceptance of autonomous vehicles: a systematic review”, Sustainability, Vol. 12 No. 5, p. 1719.

Juliani, A., Berges, V.P., Teng, E., Cohen, A., Harper, J., Elion, C., Goy, C., Gao, Y., Henry, H., Mattar, M. and Lange, D. (2018), “Unity: a general platform for intelligent agents”, arXiv preprint arXiv:1809.02627.

Kalra, N. and Paddock, S.M. (2016), “Driving to safety: how many miles of driving would it take to demonstrate autonomous vehicle reliability?”, Transportation Research Part A: Policy and Practice, Vol. 94, pp. 182-193.

Kennedy, R.S., Lane, N.E., Berbaum, K.S. and Lilienthal, M.G. (1993), “Simulator sickness questionnaire: an enhanced method for quantifying simulator sickness”, The International Journal of Aviation Psychology, Vol. 3 No. 3, pp. 203-220.

Kocić, J., Jovičić, N. and Drndarević, V. (2018), “Sensors and sensor fusion in autonomous vehicles”, 2018 26th Telecommunications Forum (TELFOR), IEEE, pp. 420-425.

Kosecka, J., Blasi, R., Taylor, C.J. and Malik, J. (1997), “Vision-based lateral control of vehicles”, Proceedings of Conference on Intelligent Transportation Systems, IEEE, pp. 900-905.

Kriegeskorte, N. and Golan, T. (2019), “Neural network models and deep learning”, Current Biology, Vol. 29 No. 7, pp. R231-R236.

Kuribayashi, A., Takeuchi, E., Carballo, A., Ishiguro, Y. and Takeda, K. (2021), “A recognition phase intervention interface to improve naturalness of autonomous driving for distracted drivers”, 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), IEEE, pp. 1737-1744.

Kuutti, S., Bowden, R., Jin, Y., Barber, P. and Fallah, S. (2020), “A survey of deep learning applications to autonomous vehicle control”, IEEE Transactions on Intelligent Transportation Systems, Vol. 22 No. 2, pp. 712-733.

Lee, D.N. (1976), “A theory of visual control of braking based on information about time-to-collision”, Perception, Vol. 5 No. 4, pp. 437-459.

Li, W., Pan, C., Zhang, R., Ren, J., Ma, Y., Fang, J., Yan, F., Geng, Q., Huang, X. and Gong, H. (2019), “AADS: augmented autonomous driving simulation using data-driven algorithms”, Science Robotics, Vol. 4 No. 28, p. eaaw0863.

Lim, K.L., Whitehead, J., Jia, D. and Zheng, Z. (2021), “State of data platforms for connected vehicles and infrastructures”, Communications in Transportation Research, Vol. 1, p. 100013.

Liu, S. (2020), Engineering Autonomous Vehicles and Robots: The DragonFly Modular-Based Approach, John Wiley & Sons.

Morando, M.M., Tian, Q., Truong, L.T. and Vu, H.L. (2018), “Studying the safety impact of autonomous vehicles using simulation-based surrogate safety measures”, Journal of Advanced Transportation, Vol. 2018.

Morra, L., Lamberti, F., Pratticó, F.G., La Rosa, S. and Montuschi, P. (2019), “Building trust in autonomous vehicles: role of virtual reality driving simulators in HMI design”, IEEE Transactions on Vehicular Technology, Vol. 68 No. 10, pp. 9438-9450.

Nunnally, J. and Bernstein, I. (1994), “The assessment of reliability”, Psychometric Theory, 3rd ed., McGraw-Hill, New York, NY, pp. 248-292.

NVIDIA (2017), “NVIDIA DRIVE CONSTELLATION: virtual reality autonomous vehicle simulator”, available at: https://developer.nvidia.com/drive/drive-constellation (accessed 10 December 2021).

Peng, B., Keskin, M.F., Kulcsár, B. and Wymeersch, H. (2021), “Connected autonomous vehicles for improving mixed traffic efficiency in unsignalized intersections with deep reinforcement learning”, Communications in Transportation Research, Vol. 1, p. 100017.

Pérez-Gil, O., Barea, R., López-Guillén, E., Bergasa, L.M., Gómez-Huelamo, C., Gutiérrez, R. and Díaz-Díaz, A. (2022), “Deep reinforcement learning based control for autonomous vehicles in CARLA”, Multimedia Tools and Applications, Vol. 81 No. 3, pp. 1-24.

Ramadhan, S.A., Joelianto, E. and Sutarto, H.Y. (2019), “Simulation of traffic control using Vissim-COM interface”, Internetworking Indonesia Journal, Vol. 11 No. 1, pp. 55-61.

Reza, M., Choudhury, S., Dash, J.K. and Roy, D.S. (2020), “An ai-based real-time roadway-environment perception for autonomous driving”, 2020 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-Taiwan), IEEE, pp. 1-2.

Rong, G., Shin, B.H., Tabatabaee, H., Lu, Q., Lemke, S., Možeiko, M., Boise, E., Uhm, G., Gerow, M. and Mehta, S. (2020), “LGSVL simulator: a high fidelity simulator for autonomous driving”, 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), IEEE, pp. 1-6.

Rosique, F., Navarro, P.J., Fernández, C. and Padilla, A. (2019), “A systematic review of perception system and simulators for autonomous vehicles research”, Sensors, Vol. 19 No. 3, p. 648.

SAE International (2018), “Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles (J3016_201806)”, available at: www.sae.org/standards/content/j3016_201806/ (accessed 1 December 2021).

Schulman, J., Wolski, F., Dhariwal, P., Radford, A. and Klimov, O. (2017), “Proximal policy optimization algorithms”, arXiv preprint arXiv:1707.06347.

Shah, S., Dey, D., Lovett, C. and Kapoor, A. (2018), “AirSim: high-fidelity visual and physical simulation for autonomous vehicles”, Field and Service Robotics, Springer, pp. 621-635.

Soteropoulos, A., Berger, M. and Ciari, F. (2019), “Impacts of automated vehicles on travel behaviour and land use: an international review of modelling studies”, Transport Reviews, Vol. 39 No. 1, pp. 29-49.

Sportillo, D., Paljic, A. and Ojeda, L. (2018), “Get ready for automated driving using virtual reality”, Accident Analysis & Prevention, Vol. 118, pp. 102-113.

Thorn, A. (2020), “3D lighting and materials”, Moving from Unity to Godot, Apress, Berkeley, CA, pp. 161-200.

Uteshev, A.Y. and Goncharova, M.V. (2018), “Point-to-ellipse and point-to-ellipsoid distance equation analysis”, Journal of Computational and Applied Mathematics, Vol. 328, pp. 232-251.

Van Brummelen, J., O’Brien, M., Gruyer, D. and Najjaran, H. (2018), “Autonomous vehicle perception: the technology of today and tomorrow”, Transportation Research Part C: Emerging Technologies, Vol. 89, pp. 384-406.

Van Der Horst, R. and Hogema, J. (1993), “Time-to-collision and collision avoidance systems”, Proceedings of the 6th ICTCT Workshop.

Van Der Laan, J.D., Heino, A. and De Waard, D. (1997), “A simple procedure for the assessment of acceptance of advanced transport telematics”, Transportation Research Part C: Emerging Technologies, Vol. 5 No. 1, pp. 1-10.

Vicroads (2022), “Coordinated ramp signals”, available at: www.vicroads.vic.gov.au/traffic-and-road-use/traffic-management/managed-motorways/coordinated-ramp-signals (accessed 14 December 2021).

Vicroads.vic.gov.au (2021), “Crash statistics: VicRoads”, available at: www.vicroads.vic.gov.au/safety-and-road-rules/safety-statistics/crash-statistics (accessed 15 December 2021).

Wang, Z., Fang, J., Dai, X., Zhang, H. and Vlacic, L. (2020), “Intelligent vehicle self-localization based on double-layer features and multilayer LIDAR”, IEEE Transactions on Intelligent Vehicles, Vol. 5 No. 4, pp. 616-625.

WHO (2018), “Global status report on road safety 2018”, World Health Organization.

Wilson, K.M., Yang, S., Roady, T., Kuo, J. and Lenné, M.G. (2020), “Driver trust & mode confusion in an on-road study of level-2 automated vehicle technology”, Safety Science, Vol. 130, p. 104845.

Wynne, R.A., Beanland, V. and Salmon, P.M. (2019), “Systematic review of driving simulator validation studies”, Safety Science, Vol. 117, pp. 138-151.

Xiong, X., Wang, J., Zhang, F. and Li, K. (2016), “Combining deep reinforcement learning and safety based control for autonomous driving”, arXiv preprint arXiv:1612.00147.

Xu, Z. and Zheng, N. (2021), “Incorporating virtual reality technology in safety training solution for construction site of urban cities”, Sustainability, Vol. 13 No. 1, p. 243.

Xu, Z., Zou, X., Oh, T. and Vu, H.L. (2021), “Studying freeway merging conflicts using virtual reality technology”, Journal of Safety Research, Vol. 76, pp. 16-29.

Yousfi, E., Malin, S., Halit, L., Roger, S. and Dogan, E. (2021), “Driver experience and safety during manual intervention in a simulated automated vehicle: influence of longer time margin allowed by connectivity”, European Conference on Cognitive Ergonomics 2021, pp. 1-7.

Yu, B., Bao, S., Zhang, Y., Sullivan, J. and Flannagan, M. (2021), “Measurement and prediction of driver trust in automated vehicle technologies: an application of hand position transition probability matrix”, Transportation Research Part C: Emerging Technologies, Vol. 124, p. 102957.

Zheng, L., Sayed, T. and Mannering, F. (2021), “Modeling traffic conflicts for use in road safety analysis: a review of analytic methods and future directions”, Analytic Methods in Accident Research, Vol. 29, p. 100142.

Zou, X., O'Hern, S., Ens, B., Coxon, S., Mater, P., Chow, R., Neylan, M. and Vu, H.L. (2021), “On-road virtual reality autonomous vehicle (VRAV) simulator: an empirical study on user experience”, Transportation Research Part C: Emerging Technologies, Vol. 126, p. 103090.


Acknowledgements

The authors declare that there is no conflict of interest; that is, there are no financial or personal relationships with other people or organizations that could inappropriately influence (bias) this work.

Corresponding author

Zheng Xu can be contacted at: Zheng.Xu3@monash.edu

About the authors

Zheng Xu received his BSc and MSc degrees in Transportation Engineering from Central South University (China) and Monash University (Australia) in 2017 and 2019, respectively. As a PhD candidate in the Department of Civil Engineering, Monash University, his research focuses on simulation studies involving autonomous vehicles, 3D modeling and VR-enabled human–machine interaction.

Yihai Fang received his BSc and PhD in Civil Engineering from Tongji University in 2011 and the Georgia Institute of Technology in 2016, respectively. He then worked as a Postdoctoral Associate at the College of Design, Construction and Planning at the University of Florida. Dr Yihai Fang’s research interests include construction automation and informatics, construction robotics, digital twin for construction and built environments and construction safety and human factors.

Nan Zheng received his BSc, MSc and PhD degrees in Transportation Engineering from Southeast University (China), Delft University of Technology and École Polytechnique Fédérale de Lausanne (EPFL) in 2007, 2009 and 2014, respectively. He was a Post-Doc at EPFL from November 2014 to August 2015 and at the Federal Institute of Technology Zurich (ETHZ) from September 2015 to September 2016. From October 2016, he was with Beihang University in Beijing as an Associate Professor in the School of Transportation. He recently joined the Institute of Transport Studies at Monash University as a Senior Lecturer. Dr Nan Zheng’s research interests include traffic flow theory, traffic big data, urban traffic operation and control, and multi-modal and new-generation traffic management. Nan Zheng is corresponding author and can be contacted at: Nan.Zheng@monash.edu

Hai L. Vu is currently a Professor of intelligent transport systems (ITS) and the Director of the Monash Institute of Transport Studies, Faculty of Engineering, Monash University, Australia. He is a leading expert with 20 years of experience in the ITS field and has authored or coauthored over 200 scientific journal articles and conference papers on data and transportation network modeling, V2X communications and connected autonomous vehicles (CAVs). His research interests include modeling, performance analysis and design of complex networks, and stochastic optimization and control with applications to connected autonomous vehicles, network planning and mobility management.
