Small arms combat modeling: a superior way to evaluate marksmanship data

Adam Biggs (U.S. Navy, San Diego, California, USA)
Greg Huffman (Warfighter Performance Department, Naval Health Research Center, San Diego, California, USA) (Leidos, Inc., San Diego, California, USA)
Joseph Hamilton (Warfighter Performance Department, Naval Health Research Center, San Diego, California, USA) (Hamilton Strategic Solutions, Midlothian, Virginia, USA)
Ken Javes (Warfighter Performance Department, Naval Health Research Center, San Diego, California, USA) (Innovative Employee Solutions, San Diego, California, USA)
Jacob Brookfield (Warfighter Performance Department, Naval Health Research Center, San Diego, California, USA) (Leidos, Inc., San Diego, California, USA)
Anthony Viggiani (USMC Training and Education Command, Quantico, Virginia, USA)
John Costa (Marksmanship Program Management Section, Weapons Training Battalion, Quantico, Virginia, USA)
Rachel R. Markwald (Warfighter Performance Department, Naval Health Research Center, San Diego, California, USA)

Journal of Defense Analytics and Logistics

ISSN: 2399-6439

Article publication date: 18 May 2023

Issue publication date: 19 September 2023


Abstract

Purpose

Marksmanship data is a staple of military and law enforcement evaluations. This ubiquitous nature creates a critical need to use all relevant information and to convey outcomes in a meaningful way for the end users. The purpose of this study is to demonstrate how simple simulation techniques can improve interpretations of marksmanship data.

Design/methodology/approach

This study uses three simulations to demonstrate the advantages of small arms combat modeling, including (1) the benefits of incorporating a Markov Chain into Monte Carlo shooting simulations; (2) how small arms combat modeling is superior to point-based evaluations; and (3) why continuous-time chains better capture performance than discrete-time chains.

Findings

The proposed method reduces ambiguity in low-accuracy scenarios while also incorporating a more holistic view of performance as outcomes simultaneously incorporate speed and accuracy rather than holding one constant.

Practical implications

This process determines the probability of winning an engagement against a given opponent while circumventing arbitrary discussions of speed and accuracy trade-offs. Someone wins 70% of combat engagements against a given opponent rather than scoring 15 more points. Moreover, risk exposure is quantified by determining the likely casualties suffered to achieve victory. This combination makes the practical consequences of human performance differences tangible to the end users. Taken together, this approach advances the operations research analyses of squad-level combat engagements.

Originality/value

For more than a century, marksmanship evaluations have used point-based systems to classify shooters. However, these scoring methods were developed for competitive integrity rather than lethality as points do not adequately capture combat capabilities. The proposed method thus represents a major shift in the marksmanship scoring paradigm.


Citation

Biggs, A., Huffman, G., Hamilton, J., Javes, K., Brookfield, J., Viggiani, A., Costa, J. and Markwald, R.R. (2023), "Small arms combat modeling: a superior way to evaluate marksmanship data", Journal of Defense Analytics and Logistics, Vol. 7 No. 1, pp. 69-87. https://doi.org/10.1108/JDAL-11-2022-0012

Publisher

Emerald Publishing Limited

Copyright © 2022. In accordance with Section 105 of the US Copyright Act, this work has been produced by a US government employee and shall be considered a public domain work, as copyright protection is not available.


1. Introduction

Evaluating performance among military and law enforcement personnel raises many challenging questions. Despite the difficulty of predicting individual preparedness for a lethal force engagement, marksmanship is almost universally found as a core measure across many different defense and security organizations. For both research and practical evaluations, marksmanship has a major advantage in that it can be readily quantified and stands among the most frequently measured performance variables in either military or law enforcement circles. One major disadvantage, however, is that marksmanship assessments tend to invite endless debate. For example, disputes about the relative merit of speed or accuracy on a given drill are common, yet they often cannot be solved by direct comparison. How do you compare a quarter-second time difference to a 10% accuracy difference on the same drill? Which is better? Answering these questions with subject matter expertise has been the basis for building marksmanship evaluations since the inception of firearms.

Tracing the history of marksmanship helps reveal how the system in use today emerged more than 100 years ago. As early as the American Revolution, marksmanship tests evaluated the lethality of the marksman. In one instance, the test involved a company commander sticking a head-sized wooden board on a tree 150 yards away and drawing a nose in the center; anyone who could shoot the nose could join the company (Harrower, 1900). Continental rifles depended upon accuracy rather than speed for lethality, and in turn, their marksmanship tests were designed for accuracy without a speed requirement. These tests continued largely unchanged for the next hundred years as accuracy dominated marksmanship evaluations. Around the turn of the 20th century, the point-based system emerged thanks in part to a combination of pay bonuses offered to expert marksmen (36 Congressional Record 798, 1903) and the development of the National Board for the Promotion of Rifle Practice (GAO, 2019; Rocketto, 2012). Points became a way of ranking shooters for classification and competition, and yet, the same point-based classification used for mass production in the Second World War (FM 23–10, 1943) remains nearly identical to the current marksmanship scoring employed today (United States Navy, 2021). Thus, point-based systems developed largely to support competitive integrity rather than lethality. The problem is that these measurements do not resemble marksmanship in a lethal force scenario, which may be why marksmanship assessments have little predictive validity for real-world shooting engagements (Morrison and Vila, 1998). As such, there remains a significant opportunity to enhance marksmanship evaluations to the practical benefit of military personnel and law enforcement officers.

Here we propose advancing marksmanship evaluations by using small arms combat modeling to create a more holistic representation of human performance. The intent is to translate raw marksmanship variables into the probability of winning a small arms combat engagement against some given opponent. This approach simulates several processes within a lethal force encounter, utilizing speed and accuracy variables in multiple Monte Carlo simulations to sample possible outcomes given the variance in both factors. Because the procedure samples from variance in marksmanship observations, the entire speed/accuracy trade-off debate is circumvented by omitting the need for arbitrary weighting or point assignment—that is, the Monte Carlo technique lets the data decide whether speed or accuracy matters more under those conditions. Points become irrelevant with this method, which would represent the most significant change for military and law enforcement marksmanship evaluations in more than a century. Moreover, risk assessment can be performed by measuring the number of casualties suffered during an engagement with multiple personnel on both sides of the firefight. Taken together, this procedure has immense practical and theoretical value when exploring the safety and efficacy of performance among military and law enforcement personnel.

2. Fundamentals of modeling armed conflict

Wargaming is a function of strategic and tactical planning. As such, in one form or another, the process has existed since the inception of organized conflict. Its most significant evolution came in the form of operations research following the Second World War (Morse and Kimball, 1951). Operations research represents a scientific and quantifiable method of providing data for military decision-makers regarding numerous operations such as troop movements and supply chain logistics. This then-emergent and now fully established field turned wargaming from a thought exercise into a computable event series that could maximize efficiency and preparation in military operations.

Modeling the dynamics of armed conflict or warfare involves a wide array of factors, but the art of this modeling is identifying only a select few variables that make a model manageable, meaningful and useful (Kress, 2012). This guidance suggests that selecting the right variables for the right model is critical, and there are many different options to consider. For example, Lanchester models utilize differential equations to model combat between two opponents as a function of mutual attrition (Atkinson et al., 2012; Kress et al., 2018; Lanchester, 1916). Extensions of this basic process have been applied to guerilla warfare as well as state warfare (Deitchman, 1962; Kress and Szechtman, 2009). This modeling approach addresses large-scale conflict, but not individual service member decision-making. The latter cannot be ignored when exploring the lethality of a given force. However, addressing individual combatant decisions requires a significant shift in method and focus as well as the variables involved in the modeling effort. Perhaps the most common warfighter decision-making model is Boyd's Observe-Orient-Decide-Act (OODA) Loop (Osinga, 2007). In this model, combat decisions follow this loop over and over again as new observations lead to new actions. An individual warfighter gains an advantage by executing the OODA loop faster than the adversary, making time a critical component of the model (Breton and Rousseau, 2005). Like combat shooting models (Washburn and Kress, 2009), the individual decision-making process is an abstraction. It might address the conceptual decision posed to the individual service member, but it lacks the formulaic precision of large-scale modeling efforts such as Lanchester models.

Despite the valuable insight provided by these various efforts, none address individual warfighter lethality. Lanchester formulas apply much better to large-scale conflict because they operate through attrition, and Boyd's OODA loop is a conceptual model without a corresponding process to quantify the outcomes. A training instructor cannot use either method to calculate individual warfighter lethality in different scenarios. Likewise, researchers cannot use these methods to explore the impact of different training regimens or equipment. Some more complex simulations exist, such as One Semi-Automated Forces (Logsdon et al., 2008), the Virtual Battle Space (Buttcher et al., 2022) and Infantry Warrior Simulation (IWARS; Kalnins et al., 2014; Samaloty et al., 2007). These platforms enable concrete manipulation of many different variables including terrain and maneuver warfare. Still, marksmanship can be underrepresented in these simulations. IWARS, as one example, uses a body part hit distribution to simulate the impact of different shots fired to account for casualties and incapacitation (Eaton et al., 2014). This data becomes essential to simulation outcomes, yet broad distributions do not take into account human performance differences in marksmanship skill nor adequately provide a way to evaluate marksmanship data as obtained from the range—one of the most robust sources of information in military assessments.

Instead, marksmanship performance measurements become limited to what can be collected and evaluated on the range itself. This effort often needlessly restricts scenarios to very narrow metrics. For example, marksmanship tests could use a par time, where the drill measures how many shots can be put on target in a given time frame. This drill may be useful as a training exercise, but the only conclusion for lethality would be that five shots are more lethal than four shots. Alternatively, fixed round counts may prevent stratification between shooters because the test only says that they meet some standard without providing a means to differentiate between them. Critical information is missing. Questions begin with, but are not limited to, who achieved the first vital hit? Because speed is held constant, there is no information about when shots were fired. Does a “hit” actually depict a shot to a vital area or a potentially lethal outcome? Hitting anywhere on the target might not adequately represent a lethal or incapacitating wound, and so the definition of a hit matters as much as the target type. Similar assumptions are embedded in this outcome such that five shots cannot truly be considered more lethal than four shots without also considering the assumptions inherent to this measure. Therefore, developing a better way to measure individual warfighter lethality utilizing human performance data is a critical need for ongoing military performance evaluations.

3. Monte Carlo simulations and warfighter lethality

Monte Carlo simulations represent another method for simulating a combat engagement, although the technique is underutilized. Common applications involve the effectiveness of weapon strikes (Chusilp et al., 2014; Hu and Wang, 2013) or even marksmanship with small arms (Mihaylov, 2017). Some applications emerged during Vietnam War-era modeling to describe both small unit, tactical-level activities as well as the larger unit and operational-level activities (Adams et al., 1961; Bonder, 2002; De Laquil, 1980; Monahan and DuBois, 1979). However, these simulations were impractical given the computing power at that time. Even a moderate number of simulations could take hours or days to complete, and the Monte Carlo technique requires an increasing number of simulations (corresponding to the variables being simulated) to ensure an accurate interpretation. This computational power concern is much less of an issue in the modern era as even modest computers can perform highly complex calculations that were beyond the reach of the computers at that time.

Currently, the Monte Carlo technique enables converting raw human performance metrics into a small arms lethality simulation that is both simple to understand and easy to calculate (Biggs and Hirsch, 2021). This approach uses marksmanship data collected from two shooters to determine which shooter is more lethal. Marksmanship data must include means and standard deviations regarding speed and accuracy for the technique to work, or otherwise have some reason to sample from a known distribution of shooter performance. Accuracy metrics become interpreted as lethal or non-lethal hits, which requires a conversion based on whether the targets involved point-based scoring (as might be measured with paper targets) or hit-and-miss scoring (as might be conducted with steel targets). Individual shots can then be assigned as a lethal or non-lethal outcome. Speed enhances these measures because two shooters might both fire a lethal or incapacitating round during the simulation, and when this happens, the faster shooter will emerge victorious.

Accuracy and speed thus become converted from raw performance metrics into a likelihood of winning the gunfight based on who fired a lethal round first. An individual simulation can produce one of four possible results between two shooters, Shooter A and Shooter B:

  1. Shooter A wins by firing a lethal shot when Shooter B misses.

  2. Shooter B wins by firing a lethal shot when Shooter A misses.

  3. Non-lethal draw where both shooters miss.

  4. Lethal draw, where both shooters fire a lethal round in such close temporal proximity that the two bullets would pass in the air and strike the opponent.

The latter possibility places specific assumptions on the scenario if distances are far and environmental factors affecting bullet trajectory become involved. Still, when sampled many thousands of times in a Monte Carlo simulation, raw marksmanship performance metrics of speed and accuracy become converted into a percentage chance of winning the gunfight against a given opponent by measuring the number of simulations resulting in each of the four possible outcomes. A quarter-second time difference and a 10% accuracy difference can then be compared directly to determine which performance is more likely to achieve victory in a head-to-head fight under given circumstances. In turn, performance differences can be presented in a simple and straightforward manner that a military audience can readily appreciate without needing to arbitrarily construct a point-based system or intentionally modify drills to hold either speed or accuracy constant. That is, rather than providing results based on statistics such as p-values, confidence intervals and effect sizes, which are commonly misunderstood by individuals with (and without) significant experience in statistics (Falk and Greenbaum, 1995; Hoekstra et al., 2014; Oaks, 1986; Simpson, 2020), the result can be presented in terms of engagement outcomes: which squad is more lethal?
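
To make the procedure concrete, the following is a minimal Python sketch of the single-shot simulation described above. The timing means, standard deviations, hit probabilities and the 100 ms lethal-draw window are illustrative assumptions chosen for the example, not values reported in this article.

```python
import random

def single_shot_duel(n_sims=100_000,
                     a_speed=(1.50, 0.30), a_hit=0.60,   # (mean s, SD s), hit probability
                     b_speed=(1.40, 0.28), b_hit=0.55,
                     draw_window=0.10):                   # near-simultaneous lethal shots
    """Sample one shot per shooter per simulation and tally the four outcomes."""
    tally = {"A wins": 0, "B wins": 0, "non-lethal draw": 0, "lethal draw": 0}
    for _ in range(n_sims):
        a_lethal = random.random() < a_hit                # biased coin flip for accuracy
        b_lethal = random.random() < b_hit
        if not a_lethal and not b_lethal:
            tally["non-lethal draw"] += 1
            continue
        a_time = max(random.gauss(*a_speed), 0.0)         # sampled shot times
        b_time = max(random.gauss(*b_speed), 0.0)
        if a_lethal and b_lethal and abs(a_time - b_time) <= draw_window:
            tally["lethal draw"] += 1                     # both rounds already in the air
        elif a_lethal and (not b_lethal or a_time < b_time):
            tally["A wins"] += 1
        else:
            tally["B wins"] += 1
    return {k: v / n_sims for k, v in tally.items()}

print(single_shot_duel())
```

Dividing each tally by the number of simulations yields the percentage chance of each of the four outcomes described above.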

It is also worth noting that the technique can be adapted to a wide range of scenarios based on what should constitute a “vital hit” from accuracy data. The four outcomes represent possibilities, but additional possibilities could also be included in the simulation. For example, an incapacitating shot might neutralize someone without killing them, which would be simulated as wounded in action rather than killed in action. Additional inclusions could represent factors such as body armor or human physiological limitations. The four simplified possibilities depict an adequate range of outcomes for illustrating how the Monte Carlo technique can be adapted to warfighter lethality evaluations—not an exhaustive list. The specific level of complexity is more a function of the marksmanship data available and question to be answered than any limitation of the Monte Carlo technique.

4. How a Markov Chain enhances the Monte Carlo simulation

Simple Monte Carlo applications can model warfighter lethality by comparing the performance of two different shooters (cf. Biggs and Hirsch, 2021). However, that technique alone is too simple to adequately measure lethality. The Monte Carlo-only approach simulates two shooters firing a single shot in a head-to-head engagement. If they both miss, the engagement ends in a non-lethal draw. Low accuracy can also produce near-meaningless simulation outcomes with a high percentage of non-lethal draws. Moreover, combat engagements incorporate many more actions than this single event. Multiple shots and multiple personnel should be included in the simulation to make the outcome interpretation more realistic.

A proposed solution is to introduce a Markov Chain and modify the technique. This approach is similar to, but distinct from, the Markov Chain Monte Carlo (Brooks et al., 2011; Geyer, 1992; Gilks et al., 1995; Hastings, 1970). Markov Chains are stochastic models that enable simulating a sequence of events with the current state dependent upon the previous state and probabilistic transition to the next state (Gagniuc, 2017; Roberts, 1996). Board games played with dice are excellent examples of Markov Chains as the current state depends on a sequence of previous events, yet the next move depends upon probability as determined by the dice. Markov Chains show the transition of states based on an assigned rule and a sample from a probability distribution. Wrapping this process in a Monte Carlo simulation allows for the entire Markov Chain to be run many times, producing an approximation for the distribution of squad-level outcomes while using a sequence of real-world observations in human performance data rather than arbitrarily assigned transition probabilities. Markov Chains create the sequence, Monte Carlo simulations approximate the likelihood of outcomes, and human performance data form the base observations for Monte Carlo simulations.

Technically, this proposed alternative bears greater resemblance to a process known as a Markov duel (Barfoot, 1974, 1989) or, more broadly, a stochastic process. The Markov Chain Monte Carlo method describes a class of algorithms used to sample from a probability distribution which is difficult to specify directly. Instead, small arms combat modeling involves a series of Monte Carlo simulations to enable a more complex sequence of actions than a simple head-to-head engagement. The added value is to simulate a series of events using probabilistic sampling and transition states to determine a more complex outcome than a single shot. Multiple shots can be modeled, and multiple shooters can be involved in the Markov Chain.

When simulating multiple shots, each individual shot will sample from a speed and accuracy distribution. Accuracy will determine the effectiveness of given shots, but speed will act as the tiebreaker if both shooters fire a lethal shot. Speed will also determine how quickly a shooter moves into the next shot, which has implications for the time accrued in the sequence. A faster but less accurate shooter might fire enough extra shots to achieve a vital hit before a more accurate but slower opponent if low accuracy extends the engagement. The process would continue until a lethal resolution or all ammunition is expended, and in the latter case, there should be a termination rule to determine whether the scenario ends or continues as reloading would then need to be simulated. Multiple shooters proceed in much the same fashion, albeit the contest is no longer head-to-head. Moreover, the Markov Chain solution adds a more granular measure of risk estimation. The probability of victory can be quantified at the squad level as with the individual level, and at the squad level, casualties suffered can be determined along with casualties inflicted during the engagement. This measurement captures not only the likelihood of victory, but the likely costs required to achieve said victory.
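
A sketch of this multi-shot extension is shown below. Each shooter's chain samples a first-shot time, then inter-shot intervals, with a hit-or-miss draw at every step; the earlier first lethal hit decides the duel, a short latency window produces lethal draws, and an empty 30-round magazine triggers the termination rule. The parameter values mirror those used in Example 1 below but should be read as illustrative assumptions rather than the authors' implementation.

```python
import random

def first_hit_time(first_shot, follow_up, hit_prob, magazine=30):
    """Clock time of the shooter's first lethal hit, or None if ammunition runs out."""
    clock = 0.0
    for shot in range(magazine):
        mean, sd = first_shot if shot == 0 else follow_up
        clock += max(random.gauss(mean, sd), 0.0)   # first-shot vs inter-shot interval
        if random.random() < hit_prob:              # biased coin flip for each shot
            return clock
    return None

def multi_shot_duel(a, b, n_sims=100_000, draw_window=0.10):
    tally = {"A wins": 0, "B wins": 0, "non-lethal draw": 0, "lethal draw": 0}
    for _ in range(n_sims):
        ta, tb = first_hit_time(**a), first_hit_time(**b)
        if ta is None and tb is None:
            tally["non-lethal draw"] += 1
        elif ta is not None and tb is not None and abs(ta - tb) <= draw_window:
            tally["lethal draw"] += 1
        elif tb is None or (ta is not None and ta < tb):
            tally["A wins"] += 1
        else:
            tally["B wins"] += 1
    return {k: v / n_sims for k, v in tally.items()}

shooter_a = dict(first_shot=(15.00, 3.75), follow_up=(3.75, 0.94), hit_prob=0.25)
shooter_b = dict(first_shot=(13.50, 3.38), follow_up=(3.38, 0.84), hit_prob=0.275)
print(multi_shot_duel(shooter_a, shooter_b))
```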

One critical aspect of the Markov Chain is whether to use a discrete-time chain or a continuous-time chain (Coolen-Schrijner and Van Doorn, 2002; Craig and Sendi, 2002; Spedicato, 2017; Suchard et al., 2001). Discrete-time chains utilize a pre-determined outcome point to simulate transitions in incremental steps, whereas continuous-time chains do not divide transitions into discrete points as the entire process remains a continuous flow. The discrete process has advantages in certain contexts, such as setting a sequential point between transitions in disease modeling (Morton and Finkenstädt, 2005). Because collecting acoustic shot-timing data from a large number of shooters has, until recently, posed logistical challenges, many marksmanship tests and existing data sets likely consist of accuracy metrics only. In this case, combat modeling could utilize a discrete process when speed information is not available. Individual shots would then proceed in a tournament-style approach, where a lethal outcome is checked at each step and the process continues with multiple shots until a lethal outcome is reached. That said, even if the discrete-time process is possible, it would be theoretically inferior to the continuous-time process as speed is a critical component of lethality. Speed is the great tiebreaker: lethal shot accuracy is irrelevant if the enemy shooter fired a lethal shot faster. Continuous-time chains integrate the speed component and produce a theoretically and practically superior modeling method.

5. Examples of small arms combat modeling in warfighter simulations

The next step is to provide three examples of the proposed modeling technique and show how it can be applied to warfighter lethality. These illustrations should provide depth and context that enable operations research professionals to utilize the technique when communicating to a military audience. In these examples, the basic scenario will be two squads engaging in a firefight at an approximate distance of 100 meters.

Basic assumptions will first be addressed through a process analysis of the engagement. For Monte Carlo simulations of warfighter lethality, the many assumptions can be broadly classified into three possible areas: (1) determination of a lethal outcome; (2) scenario parameters; and (3) data granularity. Next, three different simulations will be conducted based on different factors that would affect the modeling technique. The first simulation will demonstrate the advantage of incorporating a Markov Chain versus a straightforward Monte Carlo simulation when evaluating warfighter lethality. The second simulation will demonstrate the advantage of small arms combat modeling versus a point-based or percentile-based system in evaluating warfighter lethality. The third simulation will validate why a continuous-time chain is superior by demonstrating the number of misleading resolutions in a discrete-time chain when accuracy is even moderately high.

5.1 Assumptions in the shooting simulation

The foremost consideration is how a lethal outcome will be determined in the simulation. Marksmanship accuracy can be collected in many different ways including hit-or-miss assessments using steel targets, lethal hit zones on photorealistic targets and point-based scoring on bullseye targets. Hit-or-miss approaches can be modeled as a biased coin-flip when determining accuracy, but point-based scoring requires some necessary conversion to identify a lethal hit zone (Biggs and Hirsch, 2021). Determining a lethal outcome is a critical assumption of any modeling effort and should be documented well.

Next, the scenario parameters must be established. These parameters will include factors such as the number of shooters involved, distance between personnel, whether the shooters are moving, weapons used, ammunition available before reloading and many more aspects of combat performance which could be included. Complexity is not a weakness of the modeling effort as any depth of complexity could be simulated. More complex simulations merely complicate the number of possible transition states, programming needed and computational time. Even so, the modeling effort can only be as complex as the parameters incorporated, which is why the scenario itself represents an assumption.

The final category of assumptions involves the granularity of input data in the modeling effort. If marksmanship data is entered into the simulation, then how the marksmanship data is collected matters. Single-shot drills could provide variance estimates, but only at the group level; because each individual fired only a single shot, there is no within-shooter performance variability to estimate. A drill may collect speed and accuracy data about the general exercise, but the data collection may not distinguish between differences for a first shot versus an inter-shot interval. Speed data may aggregate all processes leading up to the trigger press, but the different exercises may not permit segmenting the data into different cognitive functions such as visual search behaviors versus distance calculations. Data granularity broadly describes the precision and detail with which data is collected for simulation input, and how the data is collected inherently limits the modeling effort. The best result occurs when the raw data collection is preceded by a systems analysis so that the drills measured adequately sample the combat marksmanship they are intended to replicate.

For the three modeling examples, the specific scenario will be a combat engagement at 100 meters between opponents firing rifles. Marksmanship data will be simulated from assumptions that parallel typical marksmanship tests. Specifically, the simulated drill has shooters fire upon a steel (i.e. hit or miss) target 100 meters from the shooter's position until scoring a hit. Data collected includes time to first shot, inter-shot interval (time in between shots following the initial shot) and hit probability. The speed data permits modeling efforts that distinguish between behaviors associated with the first shot and behaviors associated with subsequent shots. This information is important when modeling multiple shooters as it will provide an option for modeling transitions between targets using first-shot data. Accuracy can be calculated based on hit probability, which translates into the number of shots fired before hitting the target. Again, the assumed target is a steel target at 100 meters, which permits only hit-or-miss data. These outcomes produce a biased coin flip for accuracy when simulating whether the individual shot successfully struck the target.
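
As a brief illustration of the assumed data granularity, the sketch below reduces a hypothetical per-run drill log (shot timestamps and hit flags on the 100-meter steel target) to the three inputs used here: time to first shot, inter-shot interval and hit probability. The records shown are fabricated placeholders included only to show the format.

```python
from statistics import mean, stdev

# One record per drill run: shot timestamps (seconds from the start signal)
# and hit/miss flags for a steel target at 100 m. Values are placeholders.
runs = [
    {"shot_times": [14.2, 18.1, 21.7], "hits": [0, 0, 1]},
    {"shot_times": [12.9, 16.5], "hits": [0, 1]},
    {"shot_times": [16.0, 19.4, 23.2, 26.8], "hits": [0, 0, 0, 1]},
]

first_shots = [r["shot_times"][0] for r in runs]
intervals = [later - earlier
             for r in runs
             for earlier, later in zip(r["shot_times"], r["shot_times"][1:])]
shots = sum(len(r["hits"]) for r in runs)
hits = sum(sum(r["hits"]) for r in runs)

print(f"First-shot RT: mean {mean(first_shots):.2f} s, SD {stdev(first_shots):.2f} s")
print(f"Inter-shot interval: mean {mean(intervals):.2f} s, SD {stdev(intervals):.2f} s")
print(f"Hit probability: {hits / shots:.2%}")
```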

Transition states are determined by the accuracy of the shot fired, and this individual process assumes that the shooter is not eliminated from a squad-level simulation by a hostile shooter. The left half of Figure 1 shows how the first shot produces an opportunity to sample speed and accuracy from a distribution of performance (A), how subsequent shots can be distinguished from the cognitive and behavioral factors involved in the first shot (B), how weapon malfunctions provide a further opportunity to model behavior (C) and how a successful outcome produces a transition to a new target. The right half of the figure shows Shooter 1 and Shooter 2 firing upon one another in a continuous-time Markov Chain that produces multiple shots from each shooter before one successfully incapacitates the other at the end of the string. This flow is dependent upon the presumed data available in the marksmanship drill as originally collected.

Both accuracy and speed are used to influence transition states. Accuracy determines transitions based on hit-or-miss to control the individual shot outcome and how speed will be calculated. Speed determines the time for each individual shot, but speed criteria also determine when a shooter is eliminated. Any time a hostile opponent fires a lethal shot at the individual (if the opponent's shot is faster than the individual can fire a lethal shot), the individual is removed from the simulation and the hostile selects a new target (see Figure 1). Reloading speed, when applicable, would be sampled from another drill where the shooter is required to reload after hitting the target. Time to reload can be sampled as the time difference between hitting the target and the time to first shot following the reload. The first simulation will utilize a single-shooter paradigm, whereas the remaining two simulations will involve squads of 14 service members. These simulations include a latency parameter to allow for two lethal shots fired in such close temporal proximity that they result in a lethal draw, such as two shots within one hundred milliseconds of one another. This parameter draws on metrics from multiple sources that measured simple response time (RT) in military service members (Proctor et al., 2015; Vincent et al., 2008, 2012) plus the time taken for a rifle round to travel 100 m. Finally, the termination rule will be total victory; that is, a squad is only victorious when the entire enemy squad has been eliminated.
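
The squad-level, continuous-time process described above can be sketched as a simple event-driven simulation. The version below is a simplified sketch under stated assumptions rather than the authors' implementation: timing and accuracy values are illustrative, targets are chosen at random from the surviving enemy roster, a killed shooter's pending shot still resolves only if it falls within a short in-flight latency window, and reloads and wounded-in-action states are omitted.

```python
import heapq
import random

FIRST_SHOT = {"A": (15.00, 3.75), "B": (13.50, 3.38)}   # (mean, SD) in seconds
FOLLOW_UP = {"A": (3.75, 0.94), "B": (3.38, 0.84)}
HIT_PROB = {"A": 0.50, "B": 0.55}
SQUAD_SIZE = 14
LATENCY = 0.10          # seconds a fired round is treated as already in the air
MAX_SHOTS = 30          # termination rule: one magazine per shooter, no reloads

def simulate_engagement():
    alive = {"A": set(range(SQUAD_SIZE)), "B": set(range(SQUAD_SIZE))}
    death_time = {}     # (side, idx) -> time the shooter was eliminated
    events = []         # heap of (shot_time, side, idx, shots_fired)
    for side in ("A", "B"):
        for idx in alive[side]:
            t = max(random.gauss(*FIRST_SHOT[side]), 0.0)
            heapq.heappush(events, (t, side, idx, 1))
    while events and alive["A"] and alive["B"]:
        t, side, idx, fired = heapq.heappop(events)
        killed_at = death_time.get((side, idx))
        if killed_at is not None and t > killed_at + LATENCY:
            continue    # shooter was eliminated before this round left the barrel
        enemy = "B" if side == "A" else "A"
        if alive[enemy] and random.random() < HIT_PROB[side]:
            target = random.choice(sorted(alive[enemy]))
            alive[enemy].discard(target)
            death_time[(enemy, target)] = t
        if fired < MAX_SHOTS and killed_at is None:
            t_next = t + max(random.gauss(*FOLLOW_UP[side]), 0.0)
            heapq.heappush(events, (t_next, side, idx, fired + 1))
    if alive["A"] and not alive["B"]:
        winner = "A"
    elif alive["B"] and not alive["A"]:
        winner = "B"
    else:
        winner = "draw"  # ammunition expended before total victory
    casualties = {s: SQUAD_SIZE - len(alive[s]) for s in ("A", "B")}
    return winner, casualties

wins = {"A": 0, "B": 0, "draw": 0}
for _ in range(10_000):
    outcome, _ = simulate_engagement()
    wins[outcome] += 1
print(wins)
```

Averaging the returned casualty counts over the winning simulations would give the risk-exposure estimate described above alongside the probability of victory.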

Notably, the specific simulation can be affected by the marksmanship data collected, and these examples represent only one of many ways marksmanship data could be collected. For example, the sampled reload drill incorporates both the time to reload and the cognitive processes in aiming behaviors to acquire a new target following the reload, but the given data cannot distinguish between them. More complete assumptions could provide a more complete simulation. However, any lethality modeling effort will be limited by the data available, and the intent here is to provide information comparable to realistic marksmanship drills.

5.2 Example #1: Markov Chains versus standard Monte Carlo in simulating lethality

A critical question involves whether the introduction of a Markov Chain process adds significant value to the Monte Carlo simulation. This question is answered here by using a Markov-Chain-inspired process to simulate multiple shots in sequence rather than a single shot (cf. Biggs and Hirsch, 2021). The multi-shot simulation is intended to clarify the relative performance difference by reducing the number of unresolved outcomes, most notably the number of non-lethal draws where both participants miss. Two shooters missing each other must be incorporated into the modeling as contemporary field firearms performance among law enforcement suggests hit rates are well below 50% on average (14–38%; Morrison and Vila, 1998). Missed shots are likely to be accompanied by “follow-up” shots, and given the terminal ballistic realities of individual small arms weapons (influenced by shot placement, projectile, distance to the target, etc.), it is reasonable to assume that multiple-shot engagements are the norm rather than the exception. Therefore, if simulating a sequence of multiple shots reduces ambiguity in the outcomes, the simulation should produce a more informed ratio of Shooter A wins to Shooter B wins that better resembles a combat engagement.

During these example simulations, Shooter B will have a mean performance advantage of 10% for both accuracy and speed. The standard deviations in all cases will be one-quarter of the mean value. Shot times differentiate between the first shot and subsequent shots, where the average time for subsequent shots is one-quarter of the first-shot time to simulate removing cognitive processes. The first shot base speed is 15.00 s for Shooter A (SD = 3.75 s) and 13.50 s for Shooter B (SD = 3.38 s). Subsequent shot base speed is 3.75 s for Shooter A (SD = 0.94 s) and 3.38 s for Shooter B (SD = 0.84 s). Shot accuracy is simulated to be the same for first shots and subsequent shots. Three different shot accuracy scores are simulated for Shooter A: 25%, 50% and 75%. Shooter B has a 10% accuracy advantage in all cases (27.5%, 55% and 82.5%). One hundred thousand simulations were conducted for each condition, and percentage outcomes represent the number of simulations from these 100,000 that produced this result. See Figure 2.
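
For reference, the parameter rules just described (a 10% advantage for Shooter B, standard deviations equal to one-quarter of each mean, and subsequent shots taking one-quarter of the first-shot time) can be encoded directly. The helper below is a hypothetical convenience function, but it reproduces the values listed above for all three accuracy conditions.

```python
def shooter_params(first_shot_mean, accuracy):
    """Build a parameter set from the stated rules: SD = mean / 4 and
    subsequent-shot time = first-shot time / 4."""
    return {
        "first_shot": (round(first_shot_mean, 2), round(first_shot_mean / 4, 2)),
        "follow_up": (round(first_shot_mean / 4, 2), round(first_shot_mean / 16, 2)),
        "hit_prob": round(accuracy, 3),
    }

conditions = []
for base_accuracy in (0.25, 0.50, 0.75):
    shooter_a = shooter_params(15.00, base_accuracy)
    # Shooter B: 10% faster (0.90 x time) and 10% more accurate (1.10 x hit probability)
    shooter_b = shooter_params(15.00 * 0.90, base_accuracy * 1.10)
    conditions.append({"A": shooter_a, "B": shooter_b})

for condition in conditions:
    print(condition)
```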

Far left numbers represent the simulated base accuracy for Shooter A in a scenario where Shooter B has a 10% advantage in both speed and accuracy. The entire band represents the continuum of possible outcomes totaling to 100%. Blue denotes Shooter A wins outright, grey denotes a non-lethal draw where no shooter fired a lethal round, black denotes a lethal draw where both shooters fired lethal rounds and gold denotes Shooter B wins outright. The advantage of the Markov Chain approach is visible in the reduced grey bands for each accuracy simulation.

Across all three accuracy rates, the Markov Chain produced fewer non-lethal draws than the straightforward single-shot Monte Carlo. This understandable difference highlights not only that simulating a sequence of events is important, but also when that difference matters most. For high-accuracy scenarios, the chain simulation may not be as important given that the first shot will likely hit. This specific instance effectively models only speed, with accuracy having minimal influence. The assumption then becomes whether a single shot is sufficient to neutralize an opponent, which involves the assumptions underlying the definition of a lethal outcome rather than individual accuracy. Conversely, in low-accuracy scenarios, a single-shot procedure is likely to produce an exceptionally high number of unresolved outcomes or non-lethal draws. This approach does provide some insight into lethality, but as a combat engagement would never end after a single missed shot from both sides, the simulation is hardly realistic. A multi-shot engagement provides better insight by resolving performance differences in low-accuracy simulations.

5.3 Example #2: small arms combat modeling versus point-based lethality estimations

Another advantage of the Markov Chain simulation is that it better represents how combat marksmanship incorporates each of the skills measured in a marksmanship assessment. However, it is important to demonstrate this advantage as well as how a point-based system might produce misleading results. To that end, the second example utilizes the same marksmanship drill to compare a point-based or percentile-based outcome against a small arms combat simulation built on an operations research analysis of marksmanship procedures. See Table 1 for simulated marksmanship data. Equal points weighting is assumed across the four exercises.

Assuming points were also assigned to these outcomes based on the percentile score associated with performance, both Squad A and Squad B would be considered in the 50th percentile. The lethality interpretation should then be that both squads are equally lethal. By comparison, small arms combat modeling might produce a different outcome because it is based on the combat marksmanship process rather than arbitrarily weighting the outcomes of a marksmanship table. Additionally, this second example will illustrate the advantage of a squad-level simulation through both casualty estimation and the compounding squad-level advantage that results from an individual shooter advantage.

According to point-based/percentile rankings, these two squads were equivalent. The simulation results, however, indicate a two-to-one advantage for Squad B. The disparity between Squads A and B in the simulation resulted from Squad B having an advantage in skills that more directly impact outcomes in the simulated conflict. Specifically, while Squad A excelled at reload speeds, Squad B had an advantage in first-shot RT, accuracy and inter-shot interval, all of which have a more direct impact on outcomes. This is especially true since the longest simulation lasted 27 rounds (see Figure 3). Assuming a 30-round magazine, reload times would not be a relevant factor in these simulated conflicts. In addition to achieving total victory more often than Squad A, Squad B also took fewer casualties (6.99 compared to 8.11) on average to do so. Small arms combat modeling thus provided more granular and meaningful analyses than the point-based system while better representing combat marksmanship processes and producing a measure of risk exposure through estimated casualties.

The simulation demonstrates a clear advantage of the small arms combat modeling technique over a point-based or percentile-based system. First, the simulation better represents combat marksmanship by integrating an operations research analysis into the evaluation rather than an arbitrary point-based or percentile-based system. Factors such as reloading are clearly not irrelevant, but they may play a more marginal role in determining warfighter lethality that does not warrant equal weighting with another factor such as accuracy. Second, the squad-level analysis simulates how an individual shooter's advantage compounds. Example 1 demonstrated how a single-shooter advantage could be established, but when simulated in a squad, the superior force can press that advantage as it continues to grow throughout an engagement while hostiles are eliminated at a faster rate than allies. Third, this approach provides an evaluation of risk exposure. Casualties can be estimated alongside the likelihood of achieving victory, which better depicts lethality by estimating the costs of achieving a victory as well as the likelihood of emerging victorious. These combined advantages make small arms combat modeling a superior assessment of warfighter lethality compared to an arbitrarily weighted, point-based system.

5.4 Example #3: continuous-time versus discrete-time lethality simulations

A final consideration is why a continuous-time Markov Chain is important to use rather than a discrete-time chain. The continuous-time chain enables a more complex transition state between outcomes by incorporating speed data. Especially when multiple shots and multiple shooters are involved, the continuous-time chain better depicts the fluidity of a combat engagement. However, it is possible that only accuracy information might be available following a marksmanship exercise. This approach would represent a discrete-time Markov Chain as each transition state is dependent upon the individual shot serving as the discrete event rather than a continuous-time process. The third example will demonstrate the value of a continuous-time chain compared to a discrete-time chain in evaluating lethality.

Participants engage using the same marksmanship data as in Example 1, limited to the 50% base accuracy condition. The discrete-time analysis uses only accuracy, and so there is no speed to act as a tiebreaker in the event a shot is sampled as lethal from both shooters. The result is that they eliminate each other while the rest of the engagement continues. For the discrete-time chain, casualties were unusually high. More than 11 of the 14 personnel would be killed to achieve a victory compared to the lesser casualties estimated by the continuous-time chain. The likelihood of victory is also misleading as the discrete-time chain reduces the chance of victory (due in part to the possibility of all personnel being killed on both sides of the engagement) while also inflating casualties. See Table 2.
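
For comparison, the discrete-time, accuracy-only variant can be sketched as rounds of simultaneous volleys: every surviving shooter takes one shot per round at a randomly chosen member of the enemy's start-of-round roster, so both sides can lose their last shooters in the same round and produce a squad-level lethal draw. The hit probabilities, squad size and round cap below are illustrative assumptions, and target selection is deliberately simplified.

```python
import random

HIT_PROB = {"A": 0.50, "B": 0.55}   # accuracy-only inputs; no speed data available
SQUAD_SIZE = 14
MAX_ROUNDS = 30                     # termination rule: one magazine per shooter

def discrete_engagement():
    alive = {"A": SQUAD_SIZE, "B": SQUAD_SIZE}
    for _ in range(MAX_ROUNDS):
        if alive["A"] == 0 or alive["B"] == 0:
            break
        # Every surviving shooter fires once at a random member of the enemy's
        # start-of-round roster; duplicate hits on one target count only once.
        targets_hit = {"A": set(), "B": set()}
        for side, enemy in (("A", "B"), ("B", "A")):
            for _ in range(alive[side]):
                if random.random() < HIT_PROB[side]:
                    targets_hit[enemy].add(random.randrange(alive[enemy]))
        # Resolve the volley simultaneously so mutual eliminations are possible
        alive["A"] -= len(targets_hit["A"])
        alive["B"] -= len(targets_hit["B"])
    if alive["A"] and not alive["B"]:
        return "A wins", SQUAD_SIZE - alive["A"]
    if alive["B"] and not alive["A"]:
        return "B wins", SQUAD_SIZE - alive["B"]
    if not alive["A"] and not alive["B"]:
        return "lethal draw", SQUAD_SIZE
    return "unresolved", None       # ammunition expended on both sides

tally = {}
for _ in range(10_000):
    outcome, _ = discrete_engagement()
    tally[outcome] = tally.get(outcome, 0) + 1
print(tally)
```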

The third simulation demonstrates the value of a continuous-time Markov Chain when simulating warfighter lethality. Specifically, speed and accuracy data present a more decisive explanation compared to discrete-time data with accuracy alone, which produces a high number of lethal draws. Individual head-to-head engagements will often end with both shooters killing the other if base accuracy is high and accuracy is the only determining variable. In practice, this factor becomes expressed as an overestimation of casualties suffered to achieve victory and an underestimation of the chances of victory by the winning squad. Whereas the discrete-time chain measures many outcomes as a lethal draw when determining based on accuracy alone, the continuous-time chain can use speed to resolve the outcome with the faster shooter being deemed the lethal one. It is still possible to have a lethal draw, where both shooters fire lethal rounds in such temporal proximity that they would kill one another, but this outcome is much rarer when speed is measured alongside accuracy. Once again in simulating warfighter performance, the speed-plus-accuracy approach excels over an accuracy-only approach (cf. Biggs and Hirsch, 2021). Accuracy-only might be used out of necessity, but analysts should strive to include speed whenever possible while modeling warfighter lethality.

6. Discussion

Military performance encompasses a wide range of activities, but an important subset of performance assessment concerns warfighter lethality. That is, identifying how successful a given force will be in combat against a hostile adversary. While marksmanship data is one of many variables that could be utilized as a surrogate for lethality, it is often used in a way that is at best imprecise and at worst not combat-relevant. The current analysis proposes an alternative approach to evaluating warfighter lethality by using small arms combat modeling to convert human performance data into a quantifiable chance of winning a gunfight. The process embraces lethality because it is inherently adversarial. Risk exposure is also quantified in this process by identifying the likely number of casualties suffered to achieve victory. Moreover, this proposed approach circumvents arbitrary weighting of different speed and accuracy drills as well as the entire discussion of speed/accuracy trade-offs by incorporating both factors into a simulation. These combined factors make small arms combat modeling a unique method to convey the relative impact of different training procedures, equipment or other factors on warfighter performance. In doing so, this simulation approach represents the most significant shift in military marksmanship evaluations since before the Second World War.

Three exercises demonstrated the value of small arms combat modeling in simulating warfighter lethality. The first exercise demonstrated the value of integrating a Markov Chain rather than using a simple head-to-head Monte Carlo simulation (cf. Biggs and Hirsch, 2021). A Markov Chain enables simulating multiple shots, which produces much more decisive resolutions as there are far fewer non-lethal draws, especially when accuracy is low. The second exercise demonstrated how the simulation is superior to a points-based or percentile-based system. Two squads with the same average percentile ranking yielded drastically different outcomes in simulation; specifically, one squad had a two-to-one advantage in likelihood of achieving victory despite their similar points-based ranking. This evidence also demonstrates the need for more operations research analysis in marksmanship as a means of supporting better warfighter lethality evaluations. The third simulation demonstrated the value of a continuous-time Markov Chain rather than a discrete-time Markov Chain. Discrete-time chains may be necessary when accuracy information is the only marksmanship data available, but they have significant disadvantages when compared with continuous-time chains as they overestimate the number of casualties and underestimate the advantage of the winning squad. Continuous-time chains better depict the fluid nature of combat and the importance of speed when providing decisive outcomes in evaluating lethality. Each exercise points to the multiple advantages of using small arms combat modeling as an enhanced method of converting marksmanship data into warfighter lethality evaluations.

Overall, the Monte Carlo simulation approach also better represents lethality than typical marksmanship assessments. Whereas many tests rely on points-based systems or must arbitrarily hold either speed or accuracy constant to assess the other component (e.g. par time drills, shooting exercises with no time constraint), this simulation-based approach incorporates both speed and accuracy to give a more comprehensive view of lethality. Moreover, the sampling technique inherently incorporates variance into the outcome. Consistency of performance matters—particularly at scale and over time—and no shooter will win 100% of the time in a combat engagement. Incorporating variance and expressing lethality as a chance of victory embraces this concept and emphasizes that variance is a critical measure of performance. Additionally, the model produces a measure of risk assessment through casualty estimations. These various factors provide more context and depth when evaluating warfighter lethality than marksmanship scores alone could provide.

The applications also extend into multiple uses when evaluating performance, especially in areas of training and equipment. If two novel training initiatives are conducted under similar conditions, their respective effectiveness can be compared by modeling what a combat engagement between these opposing forces would look like. Rather than the arbitrary debate about the relative merits of each course, the outcome can be quantified into the chance of victory with the superior course producing the more lethal warfighters. Another use would be in equipment evaluations. Performance can be measured with different rifles or different equipment and the data can be used to model lethality with the different equipment options. This possibility further demonstrates the value of quantifying warfighter lethality. That is, a given trigger may enable shooters to fire more rapidly or better night optical devices might enable faster first hit times. The performance benefit can be quantified as some increased percentage chance of winning the engagement, which can be directly contrasted with the equipment cost. Put simply, does the command believe a 5% increased chance of victory is worth the $5,000 investment, or would they rather invest $20,000 for new equipment that gives a 10% increased chance of victory? These performance benefits can be placed into the context of lethality when briefed to senior leadership to aid decision-making. A third option involves setting a standard for performance by having a population measure. If you set the expectation of a near-peer adversary as the competing group, then the percentage of engagements won represents the individual or squad score. That is, if the individual wins 94% of their engagements in simulation, then the individual would be in the 94th percentile of shooters. An organization can set a reference population as the expectation and compare the individual shooters or squads based on the number of engagements they would win to determine a score rather than developing an arbitrarily-weighted points system.

Although there are clear advantages to utilizing a Monte Carlo simulation when evaluating warfighter lethality, these advantages should be viewed in light of their potential drawbacks. The primary drawback is the requirement of a comparison group. This lethality simulation can be biased by selecting a weaker opponent, thereby making a force seem more lethal than they actually are. However, lethality is inherently a confrontational proposition that requires an adversary—a shooter cannot reasonably be measured for lethality in contrast to a target which does not fire back. Choosing the right opponent is only a potential weakness and not a guaranteed failing. Another issue involves the marksmanship data used for simulation. The intent is to collect individual marksmanship data that can be used for squad-level lethality simulations, but the simulation can only be as precise as the data entered into it. Marksmanship drills should be evaluated alongside an operations research analysis of the marksmanship process so that optimal exercises can be designed for assessments and entered into models. Marksmanship operations research and systems analyses are sorely lacking from the respective fields currently, which focus more on large-scale conflict than individual warfighter lethality (cf. Lanchester models; Atkinson et al., 2012; Kress et al., 2018; Lanchester, 1916). In turn, individual modeling efforts must evaluate the available drills and how the data might be entered into models. Fortunately, this limitation also represents an opportunity that can be resolved with future operations research.

Of course, one notable limitation involves the nature of target range accuracy and combat accuracy. Marksmanship ranges often limit the many variables for numerous reasons, including controlled assessment and safety. Actual combat will engage a number of different factors not represented in the marksmanship range (Grossman, 2009; Grossman and Christensen, 2004). This limitation applies to any marksmanship range or assessment. Still, the goal of the present work involved extracting more meaningful conclusions from marksmanship data.

7. Future directions

The current evidence supports small arms combat modeling as a way to extract more meaningful information from marksmanship data, but it also represents a simplistic application. This Markov Chain model is relatively easy to create with fast run times to obtain results, both of which are admirable features. However, there are many future directions that could substantially enhance the basic technique to supplement more complex small arms combat modeling efforts. The most notable change should be to adapt marksmanship data into agent-based simulations (Kiesling et al., 2012; Railsback et al., 2006; Siebers et al., 2010; Woodaman, 2000). Rather than allow the outcome to be dependent upon marksmanship data alone, agent-based simulations can incorporate autonomous agents to better understand the complex interactions involved in combat. In turn, the Markov Chain supports a larger simulation with more freedom and autonomy between actors that can incorporate many more variables. Deep learning techniques could further supplement the knowledge gained from a stochastic duel (Gupta et al., 2022).

Still, the question becomes what variables to address and how best to introduce them into the Markov Chain. Again, the current effort largely revolved around extracting more valuable interpretations from marksmanship data, yet it must be noted that the often-desired one-shot/one-kill ideal of marksmanship accuracy remains a vaunted goal in combat actions; most shots are fired without ever striking the target. Instead, the volume of fire could offer a psychological advantage even if only a small percentage of rounds ever hit an effective target (Hall and Ross, 2009). Suppressive fire is likewise another tactic intended to produce a valuable opportunity without ever intending for the rounds fired to strike a target (Teo et al., 2022). Neither volume fire nor suppressive fire is represented in the current technique, albeit both demonstrate how the intent of shots fired can involve more than simply striking the target. Another critical factor could involve target acquisition, which can be important in determining the winner of a stochastic duel (Wand et al., 1993). The shooter who acquires the target first has a notable advantage in a combat scenario. This aspect both better resembles actual combat procedures and could further identify related factors that could influence the outcome, such as how attacking from a hidden position achieves a notable advantage in a Markov model (McNaught, 2002). Each variable represents one of many factors that could be introduced into the base Markov Chain technique used here to produce a more effective combat simulation.

8. Conclusion

Ultimately, modeling armed conflicts will continue to be a critical function in operations research with a military focus. Small arms combat modeling provides a method to convert the most readily available surrogate measurements for lethality, marksmanship data, into quantifiable models of winning a firefight. This approach fills a vital need of modeling warfighter lethality at the individual or squad level rather than the large-scale level of full companies or divisions engaging in combat. Squad-level analyses might become particularly relevant in operations research involving counter-terrorism activities by focusing on these smaller engagements with non-state actors (cf. Arney and Arney, 2013; Kaplan and Kress, 2005; Kress and Szechtman, 2009). Still, the core advantage is in how small arms combat modeling quantifies lethality. Any speed/accuracy trade-off or point-based weighting is circumvented as the simulation process inherently utilizes both to provide a quantifiable chance of winning the engagement while simultaneously capturing risk exposure through the expected number of casualties to achieve victory. These simulation methods provide a means to fundamentally shift marksmanship evaluations from the points-based methods used to collect data for nearly a century. Taken together, this work advances the operations research of small arms combat as well as advancing the definition of lethality by demonstrating the need for speed, accuracy and variance in performance analyses.

Figures

Figure 1: Basic flow of the Markov Chain process to convert marksmanship data into lethality metrics

Figure 2: Graphical depiction of the head-to-head outcomes using either a straightforward Monte Carlo single-shot simulation (Monte Carlo) or a Markov Chain-inspired multi-shot simulation (Markov Chain)

Figure 3: Histogram of the number of rounds until total victory was achieved

Table 1: Simulated marksmanship data from 100-meter marksmanship drills

Marksmanship Metrics          First-shot RT   Inter-shot Interval   Accuracy   Reload Time
Average Sample   Mean         14.25 s         3.56 s                26.25%     13.00 s
                 SD            3.56 s         0.89 s                 6.56%      3.25 s
Squad A          Mean         15.00 s         3.75 s                25.00%     11.00 s
                 SD            3.75 s         0.94 s                 6.25%      2.75 s
                 Percentile   42nd            42nd                  42nd       73rd
Squad B          Mean         13.50 s         3.38 s                27.50%     15.00 s
                 SD            3.38 s         0.85 s                 6.88%      3.75 s
                 Percentile   58th            58th                  58th       27th

Outcome Analysis              Squad A           Squad B
Points-based Results          50th percentile   50th percentile
Simulation Results
  Win %                       33.00%            67.00%
  Casualties Sustained        8.11              6.99

Note(s): Four data points are available from the exercise: first-shot reaction time, inter-shot interval, accuracy and reload time. Means and standard deviations represent performance from two groups, Squad A and Squad B. These performances are compared against another simulated average sample to convert performance into percentile scores. Raw performance for first-shot RT, inter-shot interval and accuracy is the same as in Example #1. Across all four metrics, both squads have the same average score of the 50th percentile

Source(s): Table by authors
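The percentile column above can be reproduced almost exactly by comparing each squad mean against the average-sample distribution with a normal CDF. The short sketch below does so for Squad A; the normality assumption and the helper name percentile are illustrative rather than a statement of the authors' exact procedure.

    from statistics import NormalDist

    # Average-sample distributions from the table: (mean, SD) for each metric.
    avg = {"first_shot": (14.25, 3.56), "interval": (3.56, 0.89),
           "accuracy": (26.25, 6.56), "reload": (13.00, 3.25)}
    squad_a = {"first_shot": 15.00, "interval": 3.75, "accuracy": 25.00, "reload": 11.00}

    def percentile(metric, value, lower_is_better=True):
        # Percentile of a squad mean relative to the average-sample distribution.
        p = NormalDist(*avg[metric]).cdf(value)
        return 100 * (1 - p) if lower_is_better else 100 * p

    for metric, value in squad_a.items():
        lower_is_better = metric != "accuracy"   # only accuracy is better when higher
        print(f"{metric}: ~{percentile(metric, value, lower_is_better):.0f}")

Averaging the four percentiles puts both squads at roughly the 50th percentile, which is exactly the ambiguity the simulation results resolve (33% versus 67% of engagements won).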

Simulated outcomes from a discrete-time chain versus a continuous-time chain Markov Chain Monte Carlo simulation

Outcome                       Squad A Wins    Squad B Wins    Lethal Draws
Discrete-time Chain
  Likelihood                  33.26%          61.26%          5.48%
  Casualties Sustained        11.23           10.55           N/A
Continuous-time Chain
  Likelihood                  26.12%          73.88%          0.00%
  Casualties Sustained        9.07            7.42            N/A

Note(s): Performance data are based on the 50% accuracy condition and the same 10% advantage in both speed and accuracy outlined in the first simulation

Source(s): Table by authors
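The lethal-draw column is the clearest behavioral difference between the two chains, and a compact way to see why is to step a discrete-time version in which all fire within a round resolves simultaneously. The sketch below does only that: it deliberately omits the timing data so the mutual-elimination mechanism stands out, and the squad size and trial count are illustrative assumptions (the accuracies 0.50 and 0.55 reflect the 50% condition and 10% advantage described in the note). Its numbers will not match the table, since it ignores speed and the full timing model; the point is only the nonzero draw rate.

    import random

    def discrete_round(a_alive, b_alive, p_a, p_b):
        # One discrete-time step: both squads' shots resolve simultaneously,
        # so both squads can be eliminated in the same round (a lethal draw).
        hits_on_b = sum(random.random() < p_a for _ in range(a_alive))
        hits_on_a = sum(random.random() < p_b for _ in range(b_alive))
        return max(a_alive - hits_on_a, 0), max(b_alive - hits_on_b, 0)

    def discrete_engagement(size=9, p_a=0.50, p_b=0.55, n_trials=20_000):
        outcomes = {"A": 0, "B": 0, "draw": 0}
        for _ in range(n_trials):
            a, b = size, size
            while a and b:
                a, b = discrete_round(a, b, p_a, p_b)
            outcomes["A" if a else ("B" if b else "draw")] += 1
        return {k: v / n_trials for k, v in outcomes.items()}

    print(discrete_engagement())

In a continuous-time chain, such as the sketch following the Conclusion, shots are ordered on the clock and resolve one at a time, so simultaneous mutual elimination effectively never occurs; that ordering is why the continuous-time row reports 0.00% lethal draws.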

Disclaimer: The authors are military service members or employees of the U.S. Government. This work was prepared as part of their official duties. Title 17, U.S.C. §105 provides that copyright protection under this title is not available for any work of the U.S. Government. Title 17, U.S.C. §101 defines a U.S. Government work as work prepared by a military service member or employee of the U.S. Government as part of that person’s official duties. Report No. 22-17 was supported by the Office of Naval Research under work unit no. N2027. The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Department of the Navy, Department of Defense, nor the U.S. Government.

References

36 Congressional Record 798 (1903), available at: https://www.govinfo.gov/content/pkg/GPO-CRECB-1903-pt1-v36/pdf/GPO-CRECB-1903-pt1-v36-26-2.pdf (accessed 19 May 2022).

Adams, H.E., Forrester, R.E., Kraft, J.F. and Oosterhout, B.B. (1961), CARMONETTE: A Computer-Played Combat Simulation, Technical Memorandum ORO-T-389, Operations Research Office, Johns Hopkins University, Baltimore, MD.

Arney, D.C. and Arney, K. (2013), “Modeling insurgency, counter-insurgency, and coalition strategies and operations”, The Journal of Defense Modeling and Simulation, Vol. 10 No. 1, pp. 57-73.

Atkinson, M.P., Gutfraind, A. and Kress, M. (2012), “When do armed revolts succeed: lessons from Lanchester theory”, Journal of the Operational Research Society, Vol. 63 No. 10, pp. 1363-1373.

Barfoot, C.B. (1974), “Markov duels”, Operations Research, Vol. 22 No. 2, pp. 318-330.

Barfoot, C.B. (1989), “Continuous‐time Markov duels: theory and application”, Naval Research Logistics (NRL), Vol. 36 No. 3, pp. 243-253.

Biggs, A.T. and Hirsch, D.A. (2021), “Using Monte Carlo simulations to translate military and law enforcement training results to operational metrics”, The Journal of Defense Modeling and Simulation, Vol. 19 No. 3, pp. 403-415.

Bonder, S. (2002), “Army operations research—historical perspectives and lessons learned”, Operations Research, Vol. 50 No. 1, pp. 25-34.

Breton, R. and Rousseau, R. (2005), “The C-OODA: a cognitive version of the OODA loop to represent C2 activities”, Proceedings of the 10th International Command and Control Research Technology Symposium.

Brooks, S., Gelman, A., Jones, G. and Meng, X.L. (Eds) (2011), Handbook of Markov Chain Monte Carlo, CRC Press.

Buttcher, D., Dreilich, C., Fleischmann, S., Löffler, T., Luther, S., Spanier, F., Ströbel, M., Diefenbach, T. and Lechner, U. (2022), “Virtual Battlespace 3: scenario analyzing capability and decision support based on data farming”, NATO Public Release, available at: https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-SAS-OCS-ORA-2016/MP-SAS-OCS-ORA-2016-22.pdf

Chusilp, P., Charubhun, W. and Koanantachai, P. (2014), “Monte Carlo simulations of weapon effectiveness using Pk matrix and Carleton damage function”, International Journal of Applied Physics and Mathematics, Vol. 4 No. 4, p. 280.

Coolen-Schrijner, P. and Van Doorn, E.A. (2002), “The deviation matrix of a continuous-time Markov chain”, Probability in the Engineering and Informational Sciences, Vol. 16 No. 3, pp. 351-366.

Craig, B.A. and Sendi, P.P. (2002), “Estimation of the transition matrix of a discrete‐time Markov chain”, Health Economics, Vol. 11 No. 1, pp. 33-42.

De Laquil, P. (1980), “Sabres II: an individual resolution small arms combat simulation model”, The Commission, Vol. 929.

Deitchman, S.J. (1962), “A Lanchester model of guerrilla warfare”, Operations Research, Vol. 10 No. 6, pp. 818-827.

Eaton, J., Kalnins, R., McKearn, M., Wilson, P. and Zecha, A. (2014), “Verification and validation of the Infantry Warrior Simulation (IWARS) through engagement effectiveness modeling and statistical analysis”, Systems and Information Engineering Design Symposium (SIEDS), pp. 260-265, IEEE.

Falk, R. and Greenbaum, C.W. (1995), “Significance tests die hard: the amazing persistence of a probabilistic misconception”, Theory and Psychology, Vol. 5 No. 1, pp. 75-98.

FM 23-10 (1943), U.S. Rifle, Caliber .30, M1903, 30 September 1943.

Gagniuc, P.A. (2017), Markov Chains: From Theory to Implementation and Experimentation, John Wiley & Sons.

Geyer, C.J. (1992), “Practical Markov chain Monte Carlo”, Statistical Science, Vol. 7 No. 4, pp. 473-483.

Gilks, W.R., Richardson, S. and Spiegelhalter, D. (Eds) (1995), Markov Chain Monte Carlo in Practice, CRC Press.

Government Accountability Office (2019), “GAO-19-287, civilian marksmanship program: information on the sale of surplus army firearms”, available at: https://www.gao.gov/assets/gao-19-287.pdf (accessed 26 May 2022).

Grossman, D. (2009), On Killing: The Psychological Cost of Learning to Kill in War and Society, Rev. ed., Back Bay Books, New York, NY.

Grossman, D. and Christensen, L.W. (2004), On Combat: The Psychology and Physiology of Deadly Conflict in War and Peace, PPCT Research Publications, Belleville, IL.

Gupta, M., Sharma, B., Tripathi, A., Singh, S., Bhola, A., Singh, R. and Dwivedi, A.D. (2022), “n-Player stochastic duel game model with applied deep learning and its modern implications”, Sensors, Vol. 22 No. 6, p. 2422.

Hall, B. and Ross, A. (2009), “Bang on target?: infantry marksmanship and combat effectiveness in Vietnam”, Australian Army Journal, Vol. 6 No. 1, pp. 139-156.

Harrower, J. (1900), “Diary of John Harrower, 1773-1776”, The American Historical Review, Vol. 6 No. 1, pp. 65-107.

Hastings, W.K. (1970), “Monte Carlo sampling methods using Markov chains and their applications”, Biometrika, Vol. 57 No. 1, pp. 97-109.

Hoekstra, R., Morey, R.D., Rouder, J.N. and Wagenmakers, E.J. (2014), “Robust misinterpretation of confidence intervals”, Psychonomic Bulletin and Review, Vol. 21 No. 5, pp. 1157-1164.

Hu, X.J. and Wang, H.Y. (2013), “Effectiveness calculation of multiple rounds simultaneous impact shooting method based on Monte Carlo method”, Applied Mechanics and Materials, Vol. 397, pp. 2459-2463, Trans Tech Publications.

Kalnins, R., McKearn, M., Wilson, P., Zecha, A. and Eaton, J. (2014), “An analysis of the Infantry Warrior Simulation (IWARS) through engagement effectiveness modeling and statistical data collection”, IIE Annual Conference. Proceedings, p. 404, Institute of Industrial and Systems Engineers (IISE).

Kaplan, E.H. and Kress, M. (2005), “Operational effectiveness of suicide-bomber-detector schemes: a best-case analysis”, Proceedings of the National Academy of Sciences, Vol. 102 No. 29, pp. 10399-10404.

Kiesling, E., Günther, M., Stummer, C. and Wakolbinger, L.M. (2012), “Agent-based simulation of innovation diffusion: a review”, Central European Journal of Operations Research, Vol. 20, pp. 183-230.

Kress, M. (2012), “Modeling armed conflicts”, Science, Vol. 336 No. 6083, pp. 865-869.

Kress, M. and Szechtman, R. (2009), “Why defeating insurgencies is hard: the effect of intelligence in counterinsurgency operations—a best-case scenario”, Operations Research, Vol. 57 No. 3, pp. 578-585.

Kress, M., Caulkins, J.P., Feichtinger, G., Grass, D. and Seidl, A. (2018), “Lanchester model for three-way combat”, European Journal of Operational Research, Vol. 264 No. 1, pp. 46-54.

Lanchester, F.W. (1916), Aircraft in Warfare: The Dawn of the Fourth Arm, Constable, London.

Logsdon, J., Nash, D. and Barnes, M. (2008), “OneSAF tutorial”, Army Simulation, Training and Instrumentation Command, Orlando, FL, Program Manager Constructive Simulation.

McNaught, K.R. (2002), “Markovian models of three‐on‐one combat involving a hidden defender”, Naval Research Logistics (NRL), Vol. 49 No. 7, pp. 627-646.

Mihaylov, D.G. (2017), “One simple model of small arms fire using the Monte Carlo method”, The Journal of Defense Modeling and Simulation, Vol. 14 No. 4, pp. 465-470.

Monahan, R.H. and DuBois, E.L. (1979), An Assessment of Available Security System Simulations to Support the TNFS2 Program, SRI International, Menlo Park, CA.

Morrison, G. and Vila, B. (1998), “Police handgun qualification: practical measure or aimless activity?”, Policing: An International Journal of Police Strategies and Management, Vol. 21, pp. 510-533.

Morse, P. and Kimball, G. (1951), Methods of Operations Research, MIT Technology Press/Wiley, New York.

Morton, A. and Finkenstädt, B.F. (2005), “Discrete time modelling of disease incidence time series by using Markov chain Monte Carlo methods”, Journal of the Royal Statistical Society: Series C (Applied Statistics), Vol. 54 No. 3, pp. 575-594.

Oakes, M. (1986), Statistical Inference: A Commentary for the Social and Behavioral Sciences, Wiley and Sons, Great Britain.

Osinga, F. (2007), Science, Strategy and War: The Strategic Theory of John Boyd, Routledge, Abingdon.

Proctor, S.P., Nieto, K., Heaton, K.J., Dillon, C.C., Schlegel, R.E., Russell, M.L. and Vincent, A.S. (2015), “Neurocognitive performance and prior injury among US Department of Defense military personnel”, Military Medicine, Vol. 180 No. 6, pp. 660-669.

Railsback, S.F., Lytinen, S.L. and Jackson, S.K. (2006), “Agent-based simulation platforms: review and development recommendations”, Simulation, Vol. 82 No. 9, pp. 609-623.

Roberts, G.O. (1996), “Markov chain concepts related to sampling algorithms”, Markov Chain Monte Carlo in Practice, Vol. 57, pp. 45-58.

Rocketto, H. (2012), “A short history of the national trophy individual rifle match”, Civilian Marksmanship Program, available at: http://thecmp.org/wp-content/uploads/NTIRifleHistory.pdf (accessed 26 May 2022).

Samaloty, N.N.E., Schleper, R., Fawkes, M.A. and Muscietta, D. (2007), “Infantry warrior simulation (IWARS): a soldier-centric constructive simulation”, Phalanx, Vol. 40 No. 2, pp. 29-31.

Siebers, P.O., Macal, C.M., Garnett, J., Buxton, D. and Pidd, M. (2010), “Discrete-event simulation is dead, long live agent-based simulation”, Journal of Simulation, Vol. 4 No. 3, pp. 204-210.

Simpson, A. (2020), “On the misinterpretation of effect size”, Educational Studies in Mathematics, Vol. 103 No. 1, pp. 125-133.

Spedicato, G.A. (2017), “Discrete time Markov chains with R”, The R Journal, Vol. 9 No. 2, pp. 84-104.

Suchard, M.A., Weiss, R.E. and Sinsheimer, J.S. (2001), “Bayesian selection of continuous-time Markov chain evolutionary models”, Molecular Biology and Evolution, Vol. 18 No. 6, pp. 1001-1013.

Teo, G., Sikorski, E., Schreck, J. and Goodwin, G. (2022), “Like day and night: comparing squad level communications and shooting performance under differing battle drill conditions”, in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 66 No. 1, pp. 611-615, SAGE Publications, Sage CA: Los Angeles, CA.

United States Navy (2021), “OPNAVINST 3591.1G, Small arms training and qualification”, Published 01 Jun 2021.

Vincent, A.S., Bleiberg, J., Yan, S., Ivins, B., Reeves, D.L., Schwab, K., Gilliland, K., Schlegel, R. and Warden, D. (2008), “Reference data from the automated Neuropsychological Assessment Metrics for use in traumatic brain injury in an active duty military sample”, Military Medicine, Vol. 173 No. 9, pp. 836-852.

Vincent, A.S., Roebuck-Spencer, T., Gilliland, K. and Schlegel, R. (2012), “Automated neuropsychological assessment metrics (v4) traumatic brain injury battery: military normative data”, Military Medicine, Vol. 177 No. 3, pp. 256-269.

Wand, K., Humble, S. and Wilson, R.J.T. (1993), “Explicit modeling of detection within a stochastic duel”, Naval Research Logistics (NRL), Vol. 40 No. 4, pp. 431-450.

Washburn, A.R. and Kress, M. (2009), Combat Modeling, Vol. 139, Springer, New York.

Woodaman, R.F. (2000), “Agent-based simulation of military operations other than war small unit combat”, Doctoral dissertation, Monterey, California.

Acknowledgements

Funding: Report No. 22-17 was supported by the Office of Naval Research under work unit no. N2027.

Corresponding author

Rachel R. Markwald can be contacted at: rachel.r.markwald.civ@health.mil
