Human Factors of Remotely Operated Vehicles: Volume 7

Table of contents

(38 chapters)

The editors of this volume would like to thank the authors whose contributions to this area have broken new ground for human considerations in a system that is often mistaken as unmanned. We would also like to thank the attendees of our two workshops on human factors of UAVs who shared their insights and scientific accomplishments with us, as well as those from the development community who conveyed to us the constraints and needs of their community. Thanks also to the sponsors of these workshops, which include the Air Force Research Laboratory, the Air Force Office of Scientific Research, NASA, US Positioning, the FAA, and Microanalysis and Design. We also thank the many individuals, including Leah Rowe, Jennifer Winner, Jamie Gorman, Preston Kiekel, Amanda Taylor, Dee Andrews, Pat Fitzgerald, Ben Schaub, Steve Shope, and Wink Bennett, who provided their valuable time and energy to assist with the workshops and this book. Last but not least, we wish to thank ROV operators: those who have attended our workshops, those whom we have come to know only through anecdotes, and those whom we will never know. It is this group that truly inspired the workshops and the book. We dedicate this effort to them.

Unmanned (or, more politically correctly, “unpiloted” or “uninhabited”) aerial vehicles (UAVs) and the broader class of remotely operated vehicles (ROVs) have attracted much attention lately from the military as well as the general public. Generally, ROVs are vehicles that do not carry human pilots or operators but instead are controlled remotely, with varying degrees of autonomy on the part of the vehicle. The role of UAVs in the military has expanded rapidly over the years, such that every branch of the U.S. military deploys some form of UAV in its intelligence, surveillance, and reconnaissance operations. Recent U.S. military successes include a USAF Predator UAV operating in Iraq, but piloted by a team at Nellis AFB (now Creech AFB) in Las Vegas, Nevada, which successfully aided in finding Saddam Hussein (Rogers, 2004). Another, more recent, example took place in August 2004, when a Predator UAV armed with Hellfire missiles, also controlled from Nellis AFB, rescued a group of U.S. Marines pinned down by sniper fire in Najaf, Iraq (CNN, 2005). The value of UAVs is recognized by other nations as well, many of which have active UAV programs, including, but not limited to, Germany, England, China, France, Canada, South Africa, and Israel.

The great success of unmanned aerial vehicles (UAVs) in performing near-real-time tactical, reconnaissance, intelligence, surveillance, and other missions has attracted broad attention from military and civilian communities. A critical contribution to the increase and extension of UAV applications resides in the separation of pilot and vehicle, allowing the operator to avoid dangerous and harmful situations. However, this apparent benefit has the potential to lead to problems when the role of humans in remotely operating “unmanned” vehicles is not considered. Although UAVs do not carry humans onboard, they do require human control and maintenance. To control UAVs, skilled and coordinated work by operators on the ground is required.

The Cognitive Engineering Research Institute's First Annual Human Factors of Unmanned Aerial Vehicles (UAVs) Workshop, held on May 24–25, 2004 in Chandler, Arizona, and Second Annual Human Factors of UAVs Workshop, held on May 25–26, 2005 in Mesa, Arizona, brought to light many human factors issues regarding the technology and operation of UAVs. An integral part of each event was the involvement of military UAV operators from the U.S. Air Force (USAF), U.S. Navy, and U.S. Army. The involvement of UAV operators in the workshops was valuable in linking developers and human factors researchers in the improvement of UAV systems and operations – a practice that is too often implemented only after a system is deployed and the problems are found. The experience of operators serves as a “user's account” of the issues and problems concerning the operation of UAVs. Because operators have firsthand experience in operating UAVs, they provide a unique perspective on the problem of identifying the most pressing human factors issues. The purpose of this chapter is to highlight the perspectives of two UAV operators who helped to set the tone for the entire First Annual Human Factors of UAVs Workshop.

When we take a top-down approach to understanding issues surrounding ROV implementation, we can employ the metaphor either literally or as a form of abstraction hierarchy (Rasmussen, 1986). Literally, the military's need for moment-to-moment information mandates a suite of context-specific technological capabilities for sensor and effector systems. This suite includes, but is not limited to, systems in outer space (such as geosynchronous orbiting platforms), high-altitude atmospheric systems (such as Global Hawk), and other craft that operate anywhere from a few hundred feet above the earth down to nearly ground level.

UAVs have been used by military forces since at least the War of Attrition – fought between Egypt and Israel between 1967 and 1970 – when the Israeli Army modified radio-controlled model aircraft to fly over the Suez Canal and take aerial photographs behind Egyptian lines (Bolia, 2004). Although the Israelis ill-advisedly abandoned the concept before the Yom Kippur War, it was taken up by several nations in the ensuing decades, and today UAVs are regarded as a routine component of surveillance operations, having played a significant role in both Afghanistan and Iraq.

The most important advance in system design is the development of modeling and simulation methods that predict complex performance before prototypes are developed. New systems are developed in a spiraling approach: as more is learned about the system, design changes are proposed and evaluated. This approach allows the engineering team to “spin out” early versions of the system for preliminary evaluation, permitting changes to be made to the system design without incurring unacceptable cost. Because of the complexity of human performance, current modeling techniques provide only a first approximation. However, it has been demonstrated that even simple, inexpensive modeling approaches are useful in uncovering workload and performance problems in developing systems (Barnes & Beevis, 2003). More importantly, these models can serve as the basis for operator simulation experiments that verify and also calibrate the original models. Furthermore, early field tests and system-of-systems demonstrations that can validate these results under actual conditions are becoming an increasingly significant part of the early design process. Fig. 1 illustrates this interdependence, indicating a spiraling process throughout the design that starts with simple predictive methods and progresses to more expensive validation methods. These iterations should continue until most of the variance in soldier performance is accounted for, and before any formal soldier testing is conducted. Fig. 1 presents the ideal combination of techniques; not all systems can be evaluated this thoroughly, but more cost-effective modeling and simulation tools combined with realistic field exercises should make this approach more the norm as future unmanned systems are developed (Barnes & Beevis, 2003). In the remainder of this chapter, several case studies are presented to illustrate how the techniques in Fig. 1 have been applied in UAV programs.
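To make the notion of a simple, inexpensive predictive model concrete, the sketch below (in Python) sums the attentional demand of tasks that overlap in time and flags moments where the combined demand exceeds a red-line threshold. The task names, demand values, and the threshold are hypothetical placeholders chosen for illustration; they are not taken from Barnes and Beevis (2003) or from any fielded workload model.

```python
# Sketch of a simple task-overlap workload estimate (illustrative only).
# Task names, demand values, and the red-line threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    start: float    # seconds into the mission segment
    end: float      # seconds into the mission segment
    demand: float   # attentional demand on an arbitrary 0-10 scale

def overload_intervals(tasks, step=1.0, red_line=7.0):
    """Return (time, load) samples where the summed demand of concurrent tasks exceeds red_line."""
    horizon = max(task.end for task in tasks)
    flagged = []
    t = 0.0
    while t <= horizon:
        load = sum(task.demand for task in tasks if task.start <= t < task.end)
        if load > red_line:
            flagged.append((t, load))
        t += step
    return flagged

tasks = [
    Task("monitor sensor feed", 0, 120, 3.0),
    Task("replan route", 30, 60, 4.5),
    Task("radio coordination", 40, 55, 2.0),
]

for time_s, load in overload_intervals(tasks):
    print(f"t = {time_s:5.0f} s: predicted load {load:.1f} exceeds red-line")
```

Even a toy model of this kind can reveal when proposed task assignments stack too much concurrent demand on one operator, which is the kind of problem the spiral process aims to catch before prototypes are built.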

UAVs have become a critical component of U.S. military operations, reducing the need to risk the life of a pilot, while performing tasks considered dull, dirty, and dangerous. UAVs currently are serving an important intelligence, surveillance, search, and destroy function in Operation Enduring Freedom. Since September 11, 2001, the public has increasingly been made aware of the role that military UAVs, such as the Predator and Global Hawk, play.

The successful culmination of missions based on unmanned aerial vehicles (UAVs) can be measured with two main parameters: (1) successful mission completion: all objectives of the mission (e.g., maneuvering and navigation, reconnaissance and targeting or search and rescue, and return) were accomplished; and (2) safety: no damage to the vehicle and no fatalities or injuries to any human were sustained throughout the mission. Automation of the UAV's control and operations is increasingly becoming a determining factor in successful mission completion and increased safety. However, in this day and age of automatically launched and retrieved swarms of UAVs, the human operator still has a critical role. Human-controlled UAVs will persist for a long time, and human error is a factor that still needs addressing in the age of automation. Even a person who has flown radio-controlled model aircraft as a hobby since childhood can still crash an expensive UAV in a matter of seconds. Moreover, there are aspects of human error in UAV control that have important implications for the implementation of automation and for keeping the human operator in the control loop.

The most basic solution for monitoring the position and attitude of a UA is through direct line-of-sight. A pilot who maintains direct line-of-sight with the aircraft, usually while standing outside, is referred to as an external pilot (EP), as opposed to an internal pilot (IP), who obtains position and attitude information electronically from inside a ground control station (GCS). Flight using an EP represents the most basic solution to the problem of separating the pilot from the aircraft while still enabling the pilot to monitor the location and attitude of the aircraft. The pilot's perspective is changed from an egocentric to an exocentric point of view. Maintaining visual contact with the UA, the EP can control the aircraft using a hand-held radio control box. Many of these control boxes are similar to those used by radio-controlled aircraft hobbyists and provide direct control of the flight surfaces of the aircraft through the use of joysticks on the box. Very little automation is involved in the use of such boxes.

DoD accidents are classified according to the severity of injury, occupational illness, and vehicle and/or property damage costs (Department of Defense, 2000). All branches of the military have similar accident classification schemes, with Class A being the most severe. Table 1 shows the accident classes for the Army. The Air Force and Navy definitions of Class A–C accidents are very similar to the Army's definitions; however, they do not have a Class D. Because the total costs of some Army UAVs are below the Class A criterion ($325,000 per Shadow aircraft; Schaefer, 2003), reviewers have begun to add Class D data to their analyses (Manning, Rash, LeDuc, Noback, & McKeon, 2004; Williams, 2004).
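The cost-based classification logic can be sketched as follows. The dollar thresholds in the code are placeholders used only to show the shape of the decision; the actual criteria are those given in Table 1 and the governing DoD instruction, which are not reproduced here.

```python
# Sketch of a cost-based accident classification (illustrative only).
# The dollar thresholds are placeholders, not the actual DoD/Army criteria.

def classify_accident(damage_cost_usd: float, fatality: bool = False) -> str:
    """Assign an accident class from damage cost; a fatality forces Class A."""
    if fatality or damage_cost_usd >= 1_000_000:   # placeholder Class A threshold
        return "Class A"
    if damage_cost_usd >= 200_000:                 # placeholder Class B threshold
        return "Class B"
    if damage_cost_usd >= 20_000:                  # placeholder Class C threshold
        return "Class C"
    return "Class D"                               # lowest (Army-only) class

# Under these placeholder thresholds, even the total loss of a $325,000
# Shadow airframe could never reach a cost-based Class A.
print(classify_accident(325_000))
```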

SD is defined as a failure to correctly sense the attitude, motion, and/or position of the aircraft with respect to the surface of the earth (Benson, 2003). The types of SD are generally thought to be “unrecognized” and “recognized” (Previc & Ercoline, 2004). Although a third type has been reported (incapacitating), it seems irrelevant to UAV operations. Unrecognized SD occurs when the person at the controls is unaware that a change in the motion/attitude of the aircraft has taken place. It is often the result of a combination of sub-threshold motion and inattention. This type of SD is known to be the single most serious human factors cause of aircraft accidents today, accounting for roughly 90% of all known SD-related mishaps (Davenport, 2000). Recognized SD occurs when a noticeable conflict is created between the actual motion/attitude of the aircraft and any one of the common physiological sensory mechanisms (e.g., visual, vestibular, auditory, and tactile). Recognized SD is the most common type of SD, though it accounts only for the remaining SD-related accidents.

The ROV ground control simulator (Fig. 1) used in this multi-sensory research consists of two workstations: pilot and sensor operator (SO). At the left workstation, the pilot controls ROV flight (via stick-and-throttle inputs as well as by invoking auto-holds), manages subsystems, and handles external communications. From the right workstation, the SO is responsible for locating and identifying points of interest on the ground by controlling cameras mounted on the ROV. Each station has an upper and a head-level 17″ color CRT display, as well as two 10″ head-down color displays. The upper CRT at each station displays a ‘God's Eye’ area map (fixed, north up) with overlaid symbology identifying the current ROV location, flight waypoints, and the current sensor footprint. The head-level CRT (i.e., the “camera display”) presents simulated video imagery from cameras mounted on the ROV. Head-up display (HUD) symbology is overlaid on the pilot's camera display, and sensor-specific data are overlaid on the SO's camera display. The head-down displays present subsystem and communication information as well as command menus. The simulation is hosted on four dual-Pentium PCs. The control sticks are from Measurement Systems Inc., and the throttle assemblies were manufactured in-house.

The goal of our initial study was to assess the usability of the prototype OCU and establish associated training issues. Seven participants completed self-paced training, guided by a training manual produced by ARI. Training was divided into three modules: (1) introduction and autonomous control, (2) manual control, and (3) creating and editing autonomous missions. Primary focus was on how to use the OCU to fly the MAV. Modules did not include elements such as fueling, setup, or tactics. A facilitator was present at all times to observe user interaction with the system and to manage the software. Data captured included time to complete each training module and related practical exercises, user feedback on questionnaires, and a written test on training content. Participants had either graduate-level experience in human factors psychology, prior military experience, or both. They were, therefore, able to provide valuable insights while they learned to operate the simulated MAV.

If these tasks are broken down by aircrew position, the pilot (often called the operator) is the prime command and control coordinator, while the second crew member (the sensor operator) is responsible for collecting, processing, and communicating sensor data. There is often an overlap of duties between these two crew members, but smaller UAVs are commonly controlled by only these two personnel. Fig. 1 illustrates the ground control shelter interface used by a typical Army UAV pilot (Air Vehicle Operator, AVO) for the US Army Shadow UAV.

Lockheed Martin has been a premier builder and developer of manned aircraft and fighter jets since 1909. Since then, aircraft design has evolved drastically in many areas, including the evolution from manual linkages to fly-by-wire systems and from mechanical gauges to glass cockpits. Lockheed Martin's knowledge of manned aircraft has produced a variety of Unmanned Aerial Vehicles (UAVs) spanning a range of sizes and wingspans, from a micro-UAV (MicroStar) to a hand-launched UAV (Desert Hawk) and up to larger platforms such as the DarkStar. Their control systems vary anywhere from remotely piloted to fully autonomous. Remotely piloted control entails full human involvement, with an operator making all decisions for the aircraft. Conversely, fully autonomous operation describes a situation in which the human has minimal contact with the platform. Flight path control relies on a set of waypoints for the vehicle to fly through. This is the most common mode of UAV navigation, and GPS has made this form of navigation practical.
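The waypoint-following idea can be illustrated with a short sketch: the vehicle steers toward the next waypoint and advances to the following one once it comes within a capture radius. The coordinates, capture radius, and flat-earth distance approximation below are assumptions made for illustration; they do not describe any particular Lockheed Martin navigation system.

```python
# Minimal sketch of waypoint-based navigation (illustrative only).
# Waypoints, the capture radius, and the flat-earth approximation are assumptions.

import math

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in meters (flat-earth, adequate for short legs)."""
    dlat = (lat2 - lat1) * 111_320.0
    dlon = (lon2 - lon1) * 111_320.0 * math.cos(math.radians(lat1))
    return math.hypot(dlat, dlon)

def bearing_deg(lat1, lon1, lat2, lon2):
    """Heading from current position to the waypoint, degrees clockwise from north."""
    dlat = lat2 - lat1
    dlon = (lon2 - lon1) * math.cos(math.radians(lat1))
    return math.degrees(math.atan2(dlon, dlat)) % 360.0

def next_command(position, waypoints, index, capture_radius_m=100.0):
    """Return (commanded_heading, waypoint_index), advancing past any reached waypoints."""
    lat, lon = position
    while (index < len(waypoints)
           and distance_m(lat, lon, *waypoints[index]) < capture_radius_m):
        index += 1
    if index == len(waypoints):
        return None, index          # route complete
    return bearing_deg(lat, lon, *waypoints[index]), index

route = [(35.05, -106.60), (35.10, -106.55), (35.15, -106.60)]  # hypothetical route
heading, i = next_command((35.049, -106.601), route, 0)
print(f"fly heading {heading:.0f} deg toward waypoint {i}")
```

The same loop runs regardless of whether the headings are flown by a remote pilot or fed directly to an autopilot, which is why waypoint navigation scales smoothly across the remotely piloted to fully autonomous spectrum described above.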

Fig. 1 presents the interface used by our pilots to fly the UAV simulation.

As they are currently conducted, missions by single ROVs consist of several sub-tasks. After a vehicle has been launched, a human operator or a small team is responsible for controlling the flight, navigation, status monitoring, flight and mission alteration, problem diagnosis, communication and coordination with other operators, and often data analysis and interpretation. These tasks are similar in terms of their locus of control (e.g., keyboard and mouse input, joystick, trackball, visual display).

Wide area search munitions (WASMs) are a cross between an unmanned aerial vehicle and a munition. With an impressive array of onboard sensors and autonomous flight capabilities, WASMs might play a variety of roles on the modern battlefield, including reconnaissance, search, battle damage assessment, or communications relay.

A fundamental issue driving much of the current research is the design of the interface between humans and ROVs. Autonomous robots are sufficiently different from most computer systems as to require new research and design principles (Adams & Skubic, 2005; Kiesler & Hinds, 2004). Previous work on coordination between humans and automated agents has revealed both benefits and costs of automation for system performance (Parasuraman & Riley, 1997). Automation is clearly essential for the operation of many complex human–machine systems. But in some circumstances automation can also lead to novel problems for operators. Automation can increase workload and training requirements, impair situation awareness, and, when particular events co-occur with poorly designed interfaces, lead to accidents (e.g., Degani, 2004; Parasuraman & Riley, 1997).

As a standard procedure of human factors engineering, the design of complex systems (e.g., operator interfaces) starts with analyses of system objectives, missions, functions, and tasks. Perceptual Control Theory (PCT) provides a theoretical framework for guiding this process. PCT is founded on notions from control theory, in which closed-loop, negative-gain feedback systems can be used to build powerful models of goal-directed behavior and to implement complex systems (Powers, 1973). One of the strengths of PCT over competing theories of human behavior is that it explains how humans can control systems that are subject to a wide variety of external influences. UAVs are controlled through the operators' interaction with the interfaces in remote control stations. A closed-loop feedback system is crucial for both operators and control systems to understand each other's states and goals. It is likely that advanced UAV control systems will require operators to interact with automated systems such as intelligent adaptive interfaces (IAIs). IAIs are sophisticated and will require knowledge about mission goals, the operators' goals and states, and the UAV and environmental states. Thus, the methods of analysis used in this research were based on PCT, given its engineering origins in control theory and its advantages in accommodating various external disturbances.
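The closed-loop, negative-feedback idea at the heart of PCT can be shown with a minimal sketch: the controller acts only to reduce the difference between a reference (goal) and its perception of the controlled variable, which is what allows it to cancel arbitrary external disturbances. The altitude example, gain, and disturbance profile are assumptions made for illustration, not parameters from Powers (1973) or from any UAV control station.

```python
# Minimal sketch of a PCT-style closed-loop, negative-feedback controller.
# The altitude scenario, gain, and disturbance values are illustrative assumptions.

def run_control_loop(reference=100.0, steps=50, gain=0.4):
    perceived = 0.0          # perceived altitude (arbitrary units)
    for t in range(steps):
        disturbance = -5.0 if 20 <= t < 30 else 0.0   # e.g., a transient downdraft
        error = reference - perceived                  # compare goal to perception
        output = gain * error                          # negative feedback: act on the error
        perceived += output + disturbance              # environment integrates action + disturbance
        if t % 10 == 0:
            print(f"t={t:2d}  perceived={perceived:6.1f}  error={error:6.1f}")

run_control_loop()
```

Because the controller responds only to the error between reference and perception, the same loop absorbs the simulated disturbance without any explicit model of it, which is the property PCT emphasizes for goal-directed behavior under varied external influences.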

The heart of the CERTT Laboratory, shown in Fig. 1, is a flexible Synthetic Task Environment (STE) designed to support many different synthetic tasks for teams working in complex environments. STEs provide an ideal environment for the study of team cognition in complex settings by providing a middle ground between the highly artificial tasks commonly found in laboratories and the often uncontrollable conditions found in the field or in high-fidelity simulations.

To understand the importance of coordination and collaboration for ROV teams, let us examine some of the typical tasks that ROV operators might be required to perform (Cooke & Shope, 2004; Gugerty, DeBoom, Walker, & Burns, 1999). To do so, we will use the members of a U.S. Air Force Predator crew as an example. The team consists of three members: an Air Vehicle Operator (AVO) who pilots the aircraft, a Payload Operator (PLO) who operates the surveillance equipment, and a Data Exploitation, Mission Planning, and Communications Operator (DEMPC) who is responsible for mission planning. In the course of a mission, the AVO is responsible for the take off and landing of the aircraft. Because they fly the aircraft from a remote location, AVOs are generally required to use visual input from a camera mounted on the nose of the aircraft to guide their flight. Once in the air, the PLO can operate cameras and sensors mounted on the belly of the plane to gather information. The DEMPC, who is in contact with the upper echelons of the organization, provides the AVO with the desired heading and the PLO with target coordinates.

Today's battlespace is a very complex system of humans and technology. It can be thought of as a system of layers – where there might be a layer of ground operations and a layer of air operations. Within the air operations layer exist two additional layers: manned air operations and unmanned air operations. If you peel back all the layers of today's battlespace and view just the “unmanned air operations” layer, you will find another complex system of humans and technology working as just one element of the overall system. This system of uninhabited air operations might consist of different types of uninhabited air vehicles (e.g., Predator, Hunter, etc.) performing different types of missions (e.g., intelligence, surveillance, and reconnaissance (ISR); ISR-strike; search and rescue, etc.).

Cognitive Load Theory (CLT) is the product of over a decade of research in the instructional science domain (Chandler & Sweller, 1991; Sweller & Chandler, 1994), and its applications to other areas of inquiry continue to expand (see Cuevas, Fiore, & Oser, 2002; Paas, Renkl, & Sweller, 2003a; Paas, Tuovinen, Tabbers, & Van Gerven, 2003b; Scielzo, Fiore, Cuevas, & Salas, 2004). The core of CLT is based on two sets of what are termed cognitive load factors, which are either endogenous or exogenous from the viewpoint of an operator interacting with the environment. Endogenous (or intrinsic) factors are sources of cognitive load in terms of the general amount and complexity of information with which the operator has to interact. In training environments, intrinsic load is directly proportional to the amount of material that trainees need to acquire. As such, the more complex the information is in terms of volume and conceptual interactivity, the higher the cognitive load will be. In operational settings, high intrinsic load can occur whenever the informational demands that need to be processed are high. Within the context of human–robot team environments, there are likely to be unique intrinsic load factors emerging from this hybrid teamwork interaction (e.g., information produced by synthetic team members). Another source of cognitive load comes from exogenous or extraneous factors. In training and operational settings alike, extraneous cognitive load may occur depending upon the manner in which information needing attention is presented. Specifically, the more complex the human–robot team interface is in relation to the process by which information is displayed and/or communicated, the more extraneous cognitive load can be present. For example, the technological tools involved in the communication of information, and the associated modalities used to process information, may inadvertently result in cognitive load. Simply put, high extraneous cognitive load can be produced by sub-optimal information presentation and communication. Overall, exogenous factors can stem from the added complexity of human–robot operations in terms of the distinct command-and-control systems that emerge from using novel technology. Within such operations, it is particularly important to control sources of extraneous cognitive load that have been shown to produce two distinct negative effects on information processing – redundancy of information and split-attention. Both have been shown to attenuate processing capacity, thereby undermining optimal information processing (e.g., Sweller, 1994; Mayer, 1999).

Computer simulation is a test-bed for research that began in the 1970s, grew tremendously in popularity in the 1990s, and has since continued to mature in complexity and realism. When computer simulations were in their infancy, their biggest advantage was the ability to exercise complete control over the environment in which the simulation took place. Missions could be changed from daylight to twilight with a few keystrokes. Weather conditions could be altered or inserted based on the needs of the experiment. Perhaps most importantly, the landmasses in which the simulations took place were boundless in their cyber world.

SA has been described as “generating purposeful behavior,” that is, behavior that is directed toward a task goal (Smith & Hancock, 1995). It involves being aware of what is happening around you and understanding what occurring events mean with respect to your current and future goals. Endsley (1995) has formally defined SA as the “perception of elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future” (Endsley, 1995, p. 36). SA has been hypothesized as being critical to operator task performance in complex and dynamic operations (Salas, Prince, Barker, & Shrestha, 1995), such as tasking and controlling remotely operated systems. Operators in remote control of ground vehicles need to be aware of where the vehicle is, what the vehicle is doing, and how activities performed as part of the overall task lead to accomplishment of mission goals. They must also consider the health of the overall system and how the environment affects vehicle status and the ability to complete tasks. In studying robot control in simulated USAR operations, Drury, Scholtz, and Yanco (2003) observed that most of the problems encountered when navigating robots resulted from the human's lack of awareness of these elements.

Imagine trying to navigate through your environment while looking straight ahead through a narrow tube. These are essentially the conditions that the operator of an ROV in teleoperation mode may have to contend with. We hypothesized that controlling ground-based ROVs would be easier if an operator developed an explicit overview of the space in which the ROV was maneuvering. Accordingly, we conducted a study in which naive undergraduate participants explored a maze-like virtual desert environment while drawing a map of the area. After completing a 30-min mapping task, participants re-entered the maze to search for and retrieve a target object. The virtual robots and landscape, which had minimal landmarks and a maze of navigable paths (see Fig. 1), were created using CeeBot (see Chadwick, Gillan, Simon, & Pazuchanics, 2004, for a discussion of the CeeBot tool). Maps drawn by participants were rated independently by three raters (graduate psychology students) for the usefulness of the map for navigating the area, on a scale from 1 (not at all useful) to 7 (extremely useful). Cronbach's alpha, computed as a consistency estimate of inter-rater reliability, was 0.96.
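For readers unfamiliar with the statistic, the sketch below shows how such an inter-rater consistency estimate can be computed, treating the three raters as “items” scoring each map. The ratings in the example are made-up placeholder data, not the data from this study.

```python
# Sketch of Cronbach's alpha as an inter-rater consistency estimate.
# The rating values below are hypothetical placeholders, not study data.

import statistics

def cronbach_alpha(ratings_by_rater):
    """ratings_by_rater: list of k lists, one per rater, each with one score per map."""
    k = len(ratings_by_rater)
    item_vars = [statistics.variance(scores) for scores in ratings_by_rater]
    totals = [sum(scores) for scores in zip(*ratings_by_rater)]   # per-map totals
    return (k / (k - 1)) * (1 - sum(item_vars) / statistics.variance(totals))

ratings = [                      # hypothetical 1-7 usefulness ratings for five maps
    [6, 2, 5, 3, 7],             # rater 1
    [6, 1, 5, 4, 7],             # rater 2
    [5, 2, 6, 3, 6],             # rater 3
]
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```

Values close to 1, like the 0.96 reported above, indicate that the raters ordered the maps in a highly consistent way.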

DOI
10.1016/S1479-3601(2006)7
Publication date
2006
Book series
Advances in Human Performance and Cognitive Engineering Research
Editors
Series copyright holder
Emerald Publishing Limited
ISBN
978-0-76231-247-4
eISBN
978-1-84950-370-9
Book series ISSN
1479-3601