Is war really in decline? Scholars such as Steven Pinker, Joshua Goldstein, and the authors of the 2013 Human Security Report contend that it is. In a recent article, I suggest that this claim is overstated.
To say that war is on the decline may at first appear fantastical: the claim is in tension with the daily news of beheadings, territorial incursions, and civil war. But it is backed by sobering statistics, among them that warfare has gone from accounting for 15 percent of human fatalities to 3 percent over the past several millennia, and that high-intensity conflicts (those that cause at least 1,000 battle deaths) have decreased while low-intensity conflicts have increased.
The problem with these statistics is that they rest almost exclusively on a historical decline in battle fatalities that has coincided with dramatic improvements in military medicine over the past two centuries. Fewer people are dying in war, but many more are seriously wounded. War has become less fatal, but this does not necessarily mean it has become less frequent.
Four improvements in medical care in conflict zones help account for the shift from fatal to nonfatal casualties in war. Better preventive care—including widespread vaccination campaigns, improved childhood nutrition, and the development of modern field sanitation—means that soldiers are healthier (and thus less likely to succumb to any wounds sustained) when entering the battlefield. It also means that fewer soldiers will be unable to fight due to disease, thus increasing the odds of survival for their compatriots.
Basic military medicine has also improved. Today’s injured soldiers benefit from the use of antibiotics and anesthesia. They also benefit from research showing that blood loss has been the leading cause of preventable battle deaths. This research has led to practices such as the deployment of forward medics, the reinvention of the tourniquet, and generally improved hemostasis.
A soldier injured on the battlefield today will have a much easier time reaching a medical facility than in past eras, thanks to improvements in medical evacuation. To some extent, these improvements are mirrored in civilian life; cars can transport the ill and wounded to hospitals much more quickly than horse-drawn carriages could. Medevac helicopters are another critical change in military medical evacuation practices, so much so that organizations such as NATO have adopted a “golden hour” policy, whereby soldiers are typically not deployed farther than one hour away (via helicopter) from the nearest hospital. Both modern medical care and evacuation are also increasingly supplied by NGOs such as the International Committee of the Red Cross (ICRC) and Médecins Sans Frontières, particularly in civil war zones.
Today’s soldiers also benefit from modern personal protective equipment. This equipment is designed to shield the head and the trunk—the two parts of the body most susceptible to fatal wounds. The body armor of today is a far cry from the battle dress uniforms worn by 19th century European soldiers.
These improvements in medical care in conflict zones have dramatically increased a soldier’s chances of surviving wounds compared to the past. In this sense, they are a clear boon. But to the extent that those wounded in action are not “counted” as casualties, countries assessing the long-term costs of war risk overlooking this growing population of wounded veterans.
Scholars in the declinist camp point to a decline of roughly 50 percent in war and armed conflict since 1946, measured in battle fatalities. An estimate that counts both the killed and the wounded, however, suggests a significantly smaller decline, closer to 20 percent.
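The gap between the two estimates can be illustrated with a toy calculation. The numbers below are invented purely for illustration (they are not the article's data); they are chosen so that a 1:1 killed-to-wounded ratio shifting toward survival reproduces the 50-percent-versus-20-percent contrast described above.

```python
# Hypothetical illustration: a fatality-only metric can overstate the
# decline in war when medicine shifts casualties from killed to wounded.
# All figures are invented for illustration, not drawn from real datasets.

battles_1946 = {"killed": 100, "wounded": 100}  # assume roughly 1:1 ratio
battles_now = {"killed": 50, "wounded": 110}    # fatalities halved, wounded up

def decline(before, after):
    """Percent decline from `before` to `after`."""
    return 100 * (before - after) / before

fatality_decline = decline(battles_1946["killed"], battles_now["killed"])
casualty_decline = decline(sum(battles_1946.values()), sum(battles_now.values()))

print(f"decline in fatalities: {fatality_decline:.0f}%")       # 50%
print(f"decline in all casualties: {casualty_decline:.0f}%")   # 20%
```

The point of the sketch is only that the choice of denominator (dead alone versus dead plus wounded) drives the size of the measured decline.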
The overstatement of the decline in war ought to counsel caution, for at least two reasons. First, the battle death metric that underpins the evidence for the declinist theory of war is also the foundation for much scholarship on war and armed conflict. The two major datasets in this field use a battle death threshold to determine which incidents are included as wars and armed conflict. This threshold does not change over time. Improvements in medical care in conflict zones mean that there are likely several conflicts not included in the later years covered by these datasets because casualties have shifted from the dead to the wounded. Scholarship based on these datasets may therefore be biased insofar as it ignores potential recent conflicts.
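The censoring effect of a fixed threshold can be sketched in a few lines. The 1,000-battle-death threshold comes from the datasets discussed above; the casualty total and era-specific fatality rates are hypothetical.

```python
# Sketch: a fixed battle-death threshold can drop a conflict from a dataset
# as survival rates improve, even when the underlying violence is unchanged.
# The 1,000-death threshold is the datasets' inclusion rule; the casualty
# count and fatality rates below are invented for illustration.

THRESHOLD = 1000          # battle deaths required for inclusion
conflict_casualties = 3000  # killed + wounded, held constant across eras

for era, fatality_rate in [("19th-century medicine", 0.50),
                           ("modern medicine", 0.25)]:
    deaths = conflict_casualties * fatality_rate
    status = "included" if deaths >= THRESHOLD else "excluded"
    print(f"{era}: {deaths:.0f} battle deaths -> {status}")
```

With half of casualties dying, the conflict clears the threshold and is recorded; with modern survival rates, the identical level of violence falls below it and disappears from the data.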
Second, and more important, the requirement that fatalities be incurred in order for an event to qualify as a war may be increasingly out of sync with recent technological developments. We typically conceive of and identify wars as having at least two sides. But what if only one side suffers fatalities? US drone strikes in South Asia and the Middle East overlap uncomfortably, and significantly, with this type of scenario. Similarly (and notwithstanding reasonable skepticism regarding the possibility and probability of cyberwars), cyber attacks can constitute a type of non-kinetic conflict that occurs under the radar and without direct casualties. The development of automated weapons systems, or “killer robots,” raises similar questions about the relationship between the human costs of conflict and whether we label a given dispute a “war.” These developments coincide with an era in which states have stopped declaring war altogether, labeling their conflicts “counterterrorism,” “police actions,” or “counterinsurgency” instead.
A major decline in war would be cause for celebration. To declare victory over war is, however, premature. The rise of the battle wounded and the changing nature of conflict raise serious questions about how we have measured war in the past, and how we may know it in the future.