Proficiency Versus Effectiveness: What Readiness Is Not

Redefining Readiness Topic Week

By Jesse Schmitt

As I write this in late April, America's National Football League (NFL) Draft season is in full swing. Around the country, hundreds of athletes showcase their physical prowess in tests and combines, thousands of journalists, scouts, and coaches pore over every metric they can find, and millions of football fans greedily consume every rumor, analysis, and mock draft available in the "content desert" that is the football offseason. And every year, a certain class of player emerges. These athletes are genetic freaks, even among their impressive peer group. They run faster, leap higher, and throw farther than everyone else. They emerge from the combines and pro days to mass media acclaim, with eye-popping numbers to back it up. Among these standouts, a given NFL team will find a "favorite," and the team's ownership, coaches, or staff will invariably fall in love with that player's potential, draft him in an early round, and sign him to a multimillion-dollar, multi-year contract. Just as invariably, year after year, many teams' draft-day favorites prove to be disappointments when the NFL season kicks off.

That is not a foregone conclusion; a number of hyped early-round choices go on to successful NFL careers. But many more flame out: their athletic gifts do not provide the same advantage in their professional careers that they enjoyed at the college level. It turns out there is more to being a star linebacker or running back than simply running fast and jumping far. Of course, no NFL scout can resist claiming to have predicted such an outcome. "Yeah, his numbers were good," they say knowingly, "but it didn't show up on the game tape."

Now, imagine these athletes never had the chance to play in a live game. There is no game tape to provide context for their incredible testing numbers. Imagine a world in which players were only ever tested, clocked, ranked, and drafted but never actually took the field against another team for a trophy. Each team has no schedule but must be ready to play a game at any time. Could a team truly project its ability to win by listing how strong and fast each player is?

The set for the 2010 NFL Draft at Radio City Music Hall in New York City. Photo by Marianne O'Leary, originally posted to Flickr as "NFL Draft 2010 Set at Radio City Music Hall," CC BY 2.0, via Wikimedia Commons. NFL, NFL Draft, NFL teams, and other logos are registered trademarks of the National Football League and the respective teams.

The U.S. Department of Defense is like an NFL franchise that, for lack of any game tape, relies solely on athletic testing numbers to determine how ready its players are for the next game. Internal metrics of success, as the only numbers available, take on outsized importance in predicting battlefield success. But anyone with even a small amount of military (or athletic) experience understands that there is more to winning than simply being the most proficient person on the field. A military, like a football team, must also be effective.

In common usage, "proficient" and "effective" are nearly synonymous, but the distinction between them is critical to understanding how the evaluation of military forces must change. "Proficiency" is being competent or skilled at a task, whereas "effectiveness" is the capacity to produce a desired or intended result. The reason the two are so easily conflated is that in many cases they coincide: when a young private finds his weapon jammed, his proficiency at clearing his rifle should yield the effective result of getting back into the fight.

To continue the athlete analogy, proficiency is testing well at the combine; effectiveness is catching touchdowns and making tackles in a game. There is a connection between the two, but a loose one, depending on the position and individual player. 

Warfighting is not a task in which proficiency necessarily correlates directly with effectiveness. Furthermore, there is no established metric for measuring battlefield proficiency: the U.S. has not fought an industrialized peer competitor since 1945, and even if it had, proficiency demonstrated in that era would be only tangentially relevant in the hyper-connected information competition space of 2021 and beyond.

The U.S. military's current fixation on the term "readiness" is focused more on proficiency than on effectiveness. Readiness has its roots in Congressional hearings: the Pentagon, ever vigilant of Congressional budgetary oversight, justifies budget requests and expenditures with metrics such as the number of exercises performed, deployments made, and man-hours spent maintaining and operating systems.

Moving down the chain, tactical units report similar metrics to the Combatant Commands, which are always hungry for data. Each echelon reports its own measures of performance because the appetite for budgetary justification is omnipresent and unforgiving. These are not selfish or incompetent actions on anyone's part; it is simply the way the incentives have been structured.

What is not being asked are the more important, but harder to measure, questions: Are the actions being taken actually effective? Is the force actually capable of producing the desired outcome, which is victory in a full-scale armed conflict? For that matter, is the goal of a modern military solely to win a declared conflict, or is it to compete effectively with global and regional adversaries before overt hostilities? Time will ultimately tell whether "readiness" actually reflected the ability to succeed, but much could be done now to answer that question rather than relying on hindsight.

Generals Brown and Berger described this same problem in their recent War on the Rocks op-ed, "Redefine Readiness or Lose." These veterans of high-level DoD decision-making charged that "readiness" cannot simply be synonymous with "availability," because that definition stifles incentives for long-term transformation in favor of maintaining current systems and strategies to preserve a fictional sense of omnipresent readiness. Put slightly more irreverently, the driving incentive for too much of the DoD is presenting green (i.e., "ready") metrics on PowerPoint slides to their boss, who can report green metrics to their boss, and so on, all the way to the Congressional Armed Services Committees. It is important to reiterate: these are not the results of incompetent or cowardly individuals; they are simply the product of systemic incentives as they exist today.

This critique is not new, nor is it seriously contested by most military leaders today. What remains, though, is a vacuum where the solution should be. The fact that change is needed is widely accepted; what that change should be is undetermined. The challenge, as always, is capturing a subjective judgment in appropriately objective metrics for data collection. How can one tell what a "ready" unit is? Until now, the U.S. has relied far too heavily on measurements of proficiency. Units report their training activity continuously, and the most "ready" unit is the one that most recently completed a standard list of operations and tasks. The U.S. military is an athlete that runs an incredible 40-yard dash, jumps through the gym's roof, and throws the ball a mile and a half, but has never analyzed the game film to see how effective it really is.

The concept of U.S. military "game film" is problematic for two reasons. First, all clear-thinking Americans should hope and pray never to actually play the "game" for which the military trains. To keep the analogy intact, military leaders should instead leverage "practice film": how well does the military play in a setting as close to the game as possible?

Second, the U.S. military's most recent "scrimmage" film, from Vietnam, Iraq, and Afghanistan, paints an ugly picture of its readiness to handle asymmetric challenges. The U.S. showed up to the game with almost every imaginable advantage: the most athletic (read: proficient) players, high-tech uniforms, a full roster, coolers full of Gatorade, air support, and scores of expert coaches to lead them. The opponents arrived with fewer players, some outdated but simple equipment, and home-field advantage. And to be frank, the opponents won, because both teams had prepared to play "football" and the U.S. team had not expected to find itself in a soccer match. The U.S. military, in the process of measuring its proficiency and determining itself "ready" for conflict, had diminished its ability to adapt once it understood the reality of the game. That those failures were both military and political in nature proves that sometimes the front office drafts a player who simply is not a good fit for the game, regardless of the numbers on draft day.

There have been positive recent developments toward getting more practice film, including a significant push for more wargaming across the naval services and numerous large-scale force-on-force exercises at Marine Corps Air-Ground Combat Center, Twentynine Palms. These efforts are laudable and should be reinforced, but they are not solutions in and of themselves. Wargames and exercises are laboratories, where experiments can be run with unproven pieces of gear and newly developed tactics. What they must become is an actual testing ground, wherein a force proves its readiness, subjective as that proof may be. "Ready for what?" is the obvious rebuttal, and one of the questions posed in the aforementioned generals' op-ed. The answer should be "problem solving."

This is the precise point where the concept of effectiveness must be most clearly divorced from the concept of proficiency. The problems faced by deployed military forces around the globe today do not have proven solutions. There is no handbook or checklist that tells a decision-maker how to counter malign narratives or compete with undeclared special operations forces. A modern military unit, more than at any other time in history, must be able to find solutions to unexpected problems. Proficiency is an element of that (a robust patrolling effort is doomed from the start if none of the squad leaders can navigate on a map), but planners and leaders who can recognize the effectiveness of a potential action are the truest measure of readiness. A simple example might be a staff that can assess whether patrolling through a town is useful or will simply feed a hostile propaganda campaign. Measuring a unit's ability to solve unknown problems is how the force should move forward in marrying proficiency with effectiveness.

This is, of course, far easier said than done. The current model of performing a list of tasks to a certain standard prior to deployment is nearly antithetical to evaluating effectiveness (though not to proficiency, which is not useless, just incomplete on its own). The DoD has honed the current readiness model through decades of conflict and painful lessons in Iraq and Afghanistan; slaughtering that sacred cow is a tall order. Instead of outright replacement, priorities must be realigned to allow for testing and evaluation of decision-making and problem-solving skills in a tactical setting. That will entail the hard decision of identifying which areas of proficiency are irrelevant. Time is the ultimate constrained resource, so deliberate thought must be put into which proficiency metrics are absolutely essential. But if the metrics that determine readiness change, and leaders communicate that change appropriately, then the demand signal for objective proficiency metrics will fall on its own. This challenge is one of risk acceptance and messaging at all levels of command, but it is exactly the cultural shift called for by Generals Berger and Brown.

Finally, there is the difficult task of testing problem-solving skills. One answer is deceptively simple: wargaming, in certain forms, is perhaps the best means of measuring the ability to solve problems. For relatively low costs in manpower and materials, wargames allow commanders and staffs to test their ability to move through the Observe-Orient-Decide-Act (OODA) loop. The game might be intricately detailed to stress a staff's ability to find and retain critical information, or it can be as simple as a verbal brief. The point is that the staff must demonstrate the ability to analyze and synthesize new information in incomplete or uncertain information environments. They must study, understand, and enhance their ability to think through new problems, rather than solve old ones ever more proficiently. It is the equivalent of quizzing a quarterback on what he would do if the defense aligned in an unexpected formation. His ability to throw accurately is rendered meaningless if he cannot understand the environment and make the right decision about whom to throw to.
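
To make that evaluation concrete, consider a minimal sketch of how an evaluation cell might record and aggregate judgments about a staff's movement through the OODA loop during a wargame. Everything in it, from the phase names used as scoring criteria to the 1-to-5 scale and the evaluator roles, is an illustrative assumption rather than any established DoD rubric; the point is only that structured, phase-level scores can show where a staff's problem solving broke down instead of reducing the game to a single pass/fail.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical rubric: each evaluator scores a staff's wargame performance
# on a 1-5 scale for each phase of the Observe-Orient-Decide-Act loop.
# The phases-as-criteria, the scale, and the evaluator roles are all
# illustrative assumptions, not an official standard.
OODA_PHASES = ("observe", "orient", "decide", "act")

@dataclass
class EvaluatorScore:
    evaluator: str
    scores: dict[str, int]  # phase -> 1 (poor) through 5 (excellent)
    note: str = ""

def aggregate(evaluations: list[EvaluatorScore]) -> dict[str, float]:
    """Average each OODA phase across evaluators, so the output shows
    where problem solving broke down rather than a single pass/fail."""
    return {
        phase: round(mean(e.scores[phase] for e in evaluations), 2)
        for phase in OODA_PHASES
    }

if __name__ == "__main__":
    game = [
        EvaluatorScore(
            "senior_mentor",
            {"observe": 4, "orient": 2, "decide": 3, "act": 4},
            "Staff spotted the red-cell feint but misread its intent.",
        ),
        EvaluatorScore(
            "red_cell_lead",
            {"observe": 4, "orient": 2, "decide": 2, "act": 3},
        ),
    ]
    # A low "orient" average flags a sense-making gap the unit can train on.
    print(aggregate(game))
```

The design choice worth noting is that the scores stay tied to phases of the decision cycle: a unit that observes well but orients poorly needs different training than one that decides slowly, and a single "ready/not ready" grade would hide that distinction.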

If anything defines the competition continuum, it is uncertainty. That uncertainty applies across the tactical, operational, and strategic echelons, and wargame structures can be tailored to each. Tactical units can maneuver forces on a map, with a dedicated red cell providing a thinking, adaptive enemy. Strategic wargames could range from diagrams discussed on whiteboards all the way to advanced simulation software emulating the actions and opinions of entire national populations.

The question of "how" can be answered in a number of ways, depending on the specific nature of the unit or headquarters being tested. The point is that for a unit to be ready in any modern and meaningful sense, its problem-solving ability must be tested and evaluated. This will be subjective to a certain extent, and it will require a corresponding cultural shift in risk acceptance and understanding by commanders. Moreover, the specific training and evaluation events still need to be created, though many elements within the DoD have already laid a firm foundation for such a campaign. But until effectiveness is valued as much as proficiency, the U.S. military is at risk of finding itself woefully unprepared when the game kicks off.

Captain Jesse Schmitt is the Assistant Intelligence Officer for the 31st Marine Expeditionary Unit in Okinawa, Japan. He earned Bachelor’s Degrees in Political Science and Economics from the University of Florida, and is currently completing a Master’s Degree in International Relations. He firmly believes that if the U.S. ever wins both the Men’s and Women’s World Cup, the rest of the world should have to call the game “soccer” until the next tournament.

Featured image: GULF OF ALASKA (May 7, 2021) – F/A-18F Super Hornets, assigned to the “Black Knights” of Strike Fighter Squadron (VFA) 154, and the “Tomcatters” of Strike Fighter Squadron (VFA) 31 are secured to the flight deck of the aircraft carrier USS Theodore Roosevelt (CVN 71) May 7, 2021, in support of flight operations above the Joint Pacific Alaska Range Complex and Gulf of Alaska during Exercise Northern Edge 2021 (NE21). (U.S. Navy photo by Mass Communication Specialist 3rd Class Brandon Richardson)

2 thoughts on “Proficiency Versus Effectiveness: What Readiness Is Not”

  1. Very good points.

    The fact of the matter is, however, that effectiveness can only be measured in the conduct and outcomes of actual battle, and there is nothing we can do to prepare pre-conflict other than to provide as much operational and doctrinal flexibility as we can, so that we can adapt to the new fight. This is in part because the enemy always gets a vote, and we never know in advance how well the enemy will perform or what his battle plan consists of, at least at the beginning of a conflict.

    This is evidenced in the very old but true saying that “the generals/admirals are always preparing to fight the last war, not the next war”.

    Thus, a military's effectiveness is likely to be low initially in a conflict, especially when it is not the initiating force (the attacker or invader). Then the question becomes: how well do we learn from dealing with the enemy? If we stubbornly stick to old shibboleths of military proficiency, we're likely to be slow learners. If we are mentally prepared to cast off everything we thought we knew in response to new knowledge, then we can be very fast learners.

    In World War Two, we started out with some preparations, because we had already observed the war's first two years while we were not yet engaged, at least directly, in the fighting. So we understood the armored maneuvering force of "blitzkrieg," the value of aircraft in sinking heavy battleships, and the value of submarines in interdicting enemy lines of supply and communication. Even so, we were still caught flat-footed at Pearl Harbor and in the Philippines, in the interdiction of our own merchant transports off our own Atlantic and Gulf coasts, and in our Army's poor performance at Kasserine Pass. US military forces were on the whole very ineffective during the first six months of our involvement in WW Two, ending in mid-1942.

    But we learned quickly. Poor senior field commanders and ship commanders, trained for peacetime, were replaced. Weapons improved. Tactics improved. Intelligence improved. By mid-1942 the initial massive losses had been stemmed, and by late the following year the US and our allies were on the offensive, even though it took nearly another two years to achieve final victory.

    Unfortunately, that's what it takes. The only way to really learn how to win wars effectively is to fight them, prepared for high-velocity learning and with a willingness to jettison old ways of thinking, old weapons, and ineffective commanders.
