Measuring the Effect of Public Diplomacy
This week, the Strategic Studies Institute of the Army War College released a report by Dr. Steve Tatham on Information Operations and Strategic Communications as practiced by the U.S. Government. The paper tackles a variety of issues within the communications sphere, and includes case studies, but offers its best analysis on the hard question of metrics.
The report's main idea centers on the essence of public diplomacy, which ASP defines as:
Communication and relationship building with foreign publics for the purpose of achieving a foreign policy objective.
At the crux of the paper is a critical analysis of how to understand whether or not U.S. Government communications efforts in this realm are actually helping achieve those objectives, and why.
Using several consistent themes invoking concepts of effectiveness and measurement, the paper stresses the need for scientific testing, which it contends is largely absent from U.S. influence activities.
It makes continuous reference to the importance of “Target Audience Analysis,” which it defines as:
…an empirical process in which the motivations for specific group behavior are analyzed, using qualitative and quantitative research methods.
This is essentially a variation on what I describe in public diplomacy as “listening.” As I explained in The New Public Diplomacy Imperative, “Listening involves an effort to understand the context, perspective, emotions and needs of foreign publics, including their social, economic, historical and cultural characteristics.” It allows an actor to better understand who the target audience is, what motivates them, and what causes their behavior. This is absolutely necessary when designing campaigns intended to influence attitudes or action. In essence, Target Audience Analysis applies an analytical framework to listening.
In further exploring Target Audience Analysis, Tatham argues that polling is often far too subjective to be an effective indicator. Despite its common use, polling as employed can contradict anecdotal evidence and paint a picture that does not reflect reality as experienced on the ground. This contradiction is not unique to overseas target audiences.
While it is a mistake to assume that anecdotal evidence and polling should always coincide, a better understanding of the relation between the two can help paint a better picture of the factors at play in a given situation.
Rather than focusing on opinion polling, Tatham believes that measures of effectiveness should be based on scientific behavioral analysis—that means quantitatively measuring actual changes in behavior. This is especially important as opinion is not necessarily an indicator of behavior—though in certain cases it can be. Using Afghanistan as an example, Tatham contends that behavior is ultimately the better “measure of effect” (MOE), as it is more empirically measurable:
Either a behavior exists, or it does not. It may reduce or increase, but it is measurable. If the campaign is to grow less poppy, you can visibly see if that campaign has been successful from the air. If the campaign is to encourage greater use of, for example, Highway 611 (the major north-south route that goes from Lashkar Gah to Sangin in Helmand, Afghanistan) by private cars (thus fostering a feeling of security), you can easily measure road usage with a few strategically placed motion sensors. You could even measure accurately the numbers of calls to a hotline that led to successful arrests or locating IEDs.
But in order to understand whether the data collected demonstrates an actual difference in behavior, there must be baseline data before a communications campaign for influence is undertaken. Any difference in behavior is what Tatham describes as the measure of effect:
The key to successful MOE is two-fold. First, activity has to be properly base-lined. It is no good attempting measure behaviors, or for that matter attitude, after the IO/PSYOPS intervention if there is no record of what the behavior or attitude was prior to it.
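The baselining logic Tatham describes can be sketched in a few lines: record the behavior before the intervention, record it after, and report the change. The function name and the daily vehicle counts below are hypothetical, standing in for sensor data such as private-car traffic on a route like Highway 611.

```python
# Minimal sketch of a baselined measure of effect (MOE).
# All counts are hypothetical daily vehicle totals from motion sensors.

def measure_of_effect(baseline_counts, post_counts):
    """Return the percent change in mean daily behavior versus the baseline."""
    baseline_mean = sum(baseline_counts) / len(baseline_counts)
    post_mean = sum(post_counts) / len(post_counts)
    return 100.0 * (post_mean - baseline_mean) / baseline_mean

# Hypothetical data: counts recorded before and after the campaign.
baseline = [120, 115, 130, 125, 118]   # abbreviated for illustration
after    = [150, 160, 145, 155, 152]

change = measure_of_effect(baseline, after)
print(f"Behavior change vs. baseline: {change:+.1f}%")  # prints +25.3%
```

Without the `baseline` list, the `after` numbers alone would be uninterpretable, which is precisely Tatham's point: no baseline, no measure of effect.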
Tatham also contends that output metrics have problematically taken the place of effectiveness in many analyses applied to the military’s efforts at public diplomacy. He explains the difference between output and effect:
Only through TAA baselining can MOE be derived. The absence of a TAA derived baseline is an immediate indicator to intelligent customers that the proposed program is unlikely to work. If any thought is given to MOE, then it is regularly in the context of measures of performance (MOP) or measures of activity (MOA). For example, the MOA associated with an airborne leaflet drop is that the necessary aircraft and equipment were serviceable and available to make a certain number of predetermined sorties. The MOP is that a specific number of leaflets or other products were dropped. The MOE, however, is the specific action(s) that the leaflets engendered in the audiences that they targeted.
What this explains is that activities and information do not equate to influence. Broadcasting a message does not mean it has caused a group to behave a certain way or to act on that message. It is thus inappropriate to measure output as if it were a result.
Overall, Tatham’s report offers valuable considerations for the discourse about effective public diplomacy, strategic communications, or whatever name one wishes to ascribe. His focus on effective metrics to determine whether communications efforts are contributing to the achievement of an objective puts the emphasis not so much on what people think, but rather on what they ultimately do—and this is key.