Generally speaking, the intelligence management cycle refers to the continuous process of tasking, collecting, processing, analyzing, and disseminating intelligence information.

It constitutes the overarching element and guiding principle of the Intelligence Community in government and military affairs, and it informs net assessment and strategic planning by "those who are often referred to, within the Intelligence Community, as intelligence 'consumers'—that is, policymakers, military officials, and other decision makers who need intelligence information in conducting their duties and responsibilities".

The intelligence cycle itself consists of six operational activities and processes: requirements, collection, processing and exploitation, analysis and production, dissemination and consumption, and feedback.
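
To make the sequence concrete, here is a minimal sketch that models the cycle as a loop, with the feedback stage either closing the loop or generating new requirements. The stage names follow the list above; the function names, placeholder data and loop structure are purely illustrative assumptions, not a description of any agency's actual workflow.

```python
# Illustrative sketch of the intelligence cycle described above.
# Stage names follow the article; the helper functions and the
# string "data" they pass around are hypothetical placeholders.

def run_cycle(requirements, max_iterations=3):
    """Run the six-stage cycle until feedback produces no new requirements."""
    for _ in range(max_iterations):
        raw_data = collect(requirements)               # collection
        usable_info = process_and_exploit(raw_data)    # processing and exploitation
        assessment = analyze_and_produce(usable_info)  # analysis and production
        disseminate(assessment)                        # dissemination and consumption
        requirements = feedback(assessment)            # feedback: may restart the cycle
        if not requirements:
            break

# Placeholder stage implementations (illustrative only).
def collect(reqs):
    return [f"raw data on {r}" for r in reqs]

def process_and_exploit(data):
    return [d.upper() for d in data]       # e.g. decryption, translation, reduction

def analyze_and_produce(info):
    return "finished assessment: " + "; ".join(info)

def disseminate(assessment):
    print(assessment)                       # delivered to the intelligence "consumers"

def feedback(assessment):
    return []                               # no new requirements -> cycle ends

run_cycle(["threat X in region Y"])         # requirements set by the decision maker
```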

It has been repeatedly observed that, when properly managed, evaluated and utilized, intelligence can be among the most valuable tools of states; there have been many cases in which intelligence services provided policymakers with information that averted disasters of global scale and irremediable consequences.

Nevertheless, one should not naively assume that intelligence organizations are infallible. As a matter of fact, intelligence management can fail in more ways than one.

DEFINING THE INTELLIGENCE CYCLE: WHERE CAN IT FAIL?

Requirements are the elements of interest that a decision maker asks the analyst to research within a predefined period of time. The decision maker should clearly define the scope of the search so that the cycle can flow smoothly.

More often than not, however, the requirements are either too broad or too narrow, which can lead to incorrect analysis and false inferences, since the analyst is obliged to make decisions and evaluations (based on their own knowledge and judgement) in order to convert often incomplete data into meaningful intelligence assessments.

Once requirements and priorities have been established, the relevant information/raw data must be collected. There are five principal intelligence collection disciplines: Human Intelligence (HUMINT), Signals Intelligence (SIGINT), Imagery Intelligence (IMINT), Open Source Intelligence (OSINT), and Measurement and Signature Intelligence (MASINT).

Some requirements are better met by specific types of collection, or by a combination of several, depending on how much can or should be collected to satisfy each requirement. Collection failure can be understood as the unavailability of timely and accurate information, i.e. the information is lacking or conflicting; it can also occur when necessary data are improperly ignored or dismissed, or when there was no indication that such data needed to be collected.

Ineffective collection techniques, as well as deceptive information planted by the opponent (usually through a source) with the aim of misleading the intelligence agencies, can also result in major collection failures.

It should be noted that collection produces information, not intelligence.

Processing and exploitation involves converting the vast amount of information collected into a form usable by analysts. This can be achieved through decryption, language translation, and data reduction.

Problems arise with processing and exploitation mainly because of the sheer volume of available information and the inadequate number of analysts tasked with processing it. As a result, significant data may be overlooked or dismissed because they do not fall within the exact scope of the requirements.

Analysis and Production is the integration, evaluation and conversion (by subject-matter specialists) of often incomplete and conflicting data into finished intelligence reports. Analysis is a key locus of intelligence failure: it has become clear on many occasions that disasters could have been prevented had the relevant data been placed in the right context.

In a world of rapidly developing crises, current/tactical issues are often favored over operational/long-term ones, causing analysis to become dangerously fragmented; in addition, analytical preconceptions, misinterpretations and cognitive biases can lead analysts to draw the wrong conclusions. The resulting threat assessment is then an underestimate, an overestimate, or outright wrong.

Dissemination is the distribution of the finished intelligence assessments to the policy/decision makers whose requests initiated the intelligence requirements. Policymakers then take the necessary action based on the information they are given, and their decisions may generate new requirements, restarting the intelligence cycle.

Failures in dissemination, often due to security concerns (the need-to-know requirement), practical on-the-ground difficulties, or compartmentalization (inadequate inter-agency sharing), result in ineffective communication and prevent timely and accurate information from reaching the parties who can act on it, leading to colossal intelligence failures.

The feedback part of the cycle assesses the degree to which the finished intelligence has been satisfactory and has addressed the initial requirements. Depending on the outcome of the evaluation, further analysis or data collection may be required. However, more often than not, policy/decision makers fail to communicate their feedback, or fail to do so in a timely manner (while the topic is still relevant), so as to assist analysts in developing new intelligence assessments.

It is widely accepted that intelligence is most likely to fail at three levels: collection, analysis and acceptance (the willingness and determination of politicians to develop their policies based on the intelligence received). It is worth noting that Richard Betts (1978) has argued on several occasions that, in reality, the vast majority of intelligence failures are not due to "analysis failure" and even less to "collection failure"; they are largely due to failure on the part of decision makers (who are also part of the intelligence cycle), the so-called "decision maker failure".

As Handel points out in "Intelligence & The Problem of Strategic Surprise" (1984), military strategic surprises and failures are, for the most part, the direct result of miscalculations and shortcomings at the level of acceptance of intelligence.

It is no secret that decision makers often tend to ignore intelligence (or even slightly distort it), if it contradicts their policy agendas or preconceived ideas (failures in direction, e.g. Stalin’s rejection of a war warning). There have been many instances where “intelligence successes” were not welcomed in policy circles and ultimately failed to prevent catastrophic policy choices.

FOUR MAJOR INTELLIGENCE FAILURES

November 2015 Paris Attacks: Widely considered the most abhorrent attack in Europe in a generation, the Paris attacks are an example of intelligence failure, and a "post-mortem evaluation" is necessary in order to identify which parts of the intelligence process did not function properly. An intelligence failure can be assessed at two different levels. The first is to determine whether it was the result of strategic or operational shortcomings.

The second is to evaluate the intelligence deficiencies that led to the failure. At the strategic level, the French intelligence agencies were well aware of the danger that ISIS, Al Nusra and Al Qaeda represented and had pushed for more financial resources and a relaxation of the Data Protection Regulation. At the operational level, which is vastly more complex, all elements of the intelligence process are examined: identification of a threat, collection, analysis and the resulting action.

Failure in the identification and prioritization of threats: the first and most common mistake is failing to identify a threat as such, or failing to place it in the right priority framework. This can occur, on the one hand, because the intelligence apparatus is concentrated on known, specific threats deemed important, so that new threats remain undetected or slip under the intelligence radar; on the other hand, a threat may be assessed as potential or even probable yet not evaluated as imminent or of significant impact.

In the Paris attacks, both variants were in play: some of the attackers were already known to the authorities, while others were not properly identified as a threat, mainly due to a lack of human intelligence and of tangible evidence directly connecting them to the plot.

Failure in information sharing: The Paris attacks highlighted a fatal flaw in Europe's security structure. It is no secret that there is little intelligence sharing among EU member states, which hampers collection and yields insufficient results, and that there are no shared databases on suspected terrorists. Turkey had reportedly notified France twice about one of the attackers but received no feedback.

At this point, it is worth recalling that the French authorities receive an enormous number of "tip-offs" daily, and, more often than not, these lack sufficient detail to allow for investigation. Information sharing is vital, however, when it comes to foreign-fighter mobilizations, and in this case the suspects should have been monitored both in Syria and in Europe.

Had there been a common cooperation framework among EU and NATO member states, with regularly updated databases of suspects and systematic sharing and exchange of relevant intelligence, this epic failure might have been prevented.

Failure in surveillance (operational part): Following the identification of a threat, a surveillance detail (usually comprising 15-20 people) is put in place to allow for further investigation, intelligence collection and new leads. France has close to 20,000 individuals on its national security watch list, of whom roughly 11,000 are considered radicalized (including some 1,200 foreign fighters).

In practical terms, France would need over 250,000 trained personnel to monitor all of these suspects; given budgetary constraints and limited manpower, this is simply not feasible. Nevertheless, three of the attackers were known to and monitored by the police, and they were still able to escape surveillance on multiple occasions (including shortly after the attacks) while moving freely between Europe and Syria.
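
A rough back-of-the-envelope calculation using the figures cited above (a detail of 15-20 officers per target and roughly 20,000 watch-listed individuals) makes the scale of the problem explicit. The figures come from this article, and treating every suspect as a permanent, round-the-clock target is a deliberate simplification.

```python
# Back-of-the-envelope check of the surveillance arithmetic cited above.
# Inputs (15-20 officers per detail, ~20,000 watch-listed individuals)
# are the article's figures; assuming every suspect gets a full detail
# is a simplifying assumption.

watch_list = 20_000                 # individuals on the national security watch list
officers_per_detail = (15, 20)      # size of one round-the-clock surveillance detail

low = watch_list * officers_per_detail[0]    # 300,000
high = watch_list * officers_per_detail[1]   # 400,000

print(f"Personnel needed to monitor everyone: {low:,} to {high:,}")
# Even the low end exceeds the ~250,000 figure quoted above, so only a
# small, prioritized subset of suspects can realistically be monitored.
```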

Consequently, although some of the attackers were indeed identified as a threat, surveillance failed to track them both in France and in Belgium, because they were not considered "high-priority targets"; proper tracking would have allowed the French authorities to get wind of the planned onslaught and possibly prevent the attacks.

9/11 ATTACKS

Intelligence Planning: As stated above, clear direction by policymakers (the initial stage of the intelligence cycle) plays a key role in the cycle's success. In the case of 9/11, the Bush administration was predominantly focused on Iraq and other threats, including Iran and Hezbollah, and failed to acknowledge the severity of the threat posed by Al-Qaeda, despite a series of attacks on U.S. targets in the 1990s.

Richard Clarke stated that the White House "never really gave good, systematic, timely guidance to the Intelligence Community about what the priorities were at the national level". The failure to correctly perceive the origin of the threat led to erroneous and insufficient intelligence planning, which ultimately undermined the other components of the intelligence cycle.

Collection: The next component of the intelligence cycle, the collection of intelligence on the prioritized threat, was "doomed to fail": the Intelligence Community, which was also severely under-resourced in manpower, was tasked with monitoring other threats and allocated only limited resources to "transnational terrorism".

Relatedly, due to the absence of human intelligence, analysts were forced to rely on technical intelligence (intercepted communications, satellite imagery, tip-offs) and were consequently prevented from developing insights into terrorist cells.

Intelligence Sharing: Prior to 9/11, the whole intelligence apparatus was structured differently and intelligence agencies were reluctant to share any information. All U.S. intelligence agencies had credible threat indicators (though not of sufficient specificity) which, had they been pieced together and shared, could have allowed the disastrous plot to be foiled.

Analysis: As mentioned above, analytical preconceptions, misinterpretations and cognitive biases can lead to erroneous strategic analysis (or to none at all), and 9/11 is a case in point. Pre-9/11 analysis was based on conventional wisdom and normative beliefs, which made analysts unreceptive to new terrorist tactics because they were unable to imagine things that had never occurred before. Paired with limited intelligence, this proved a fatal error.

Dissemination: It is difficult to say whether the dissemination process failed in this case, in the sense that the Intelligence Community disseminated various general warnings but failed to "pin down" the conspiracy and, consequently, did not present it in coherent form to the White House. In the absence of specificity and actionable intelligence, no measures were taken.

OPERATION BARBAROSSA

Decision Maker Failure / Failure in Acceptance: Operation Barbarossa (Hitler’s plan for the invasion of the Soviet Union) is a blatant case of strategic/tactical surprise and political intelligence failure. In this case, intelligence collection and dissemination cannot be said to have failed; the Soviets invested heavily in intelligence efforts and their operatives had done a remarkable job of penetrating Hitler’s political apparatus. Stalin received good intelligence from numerous credible sources (including the U.K. and the U.S.) but rejected it as disinformation and failed to act on it.

This monumental failure was not due to intelligence ineptitude; it was caused by Stalin's inability to assess the situation correctly, his profound distrust of the British (and of the West in general, which he suspected of trying to drag the Soviet Union into the war against Germany) and his megalomaniac personality.

Failure in Analysis: The Soviet leader misinterpreted the intelligence he received and operated on preconceptions and cognitive dogmatism (some of his top intelligence officials further confused the picture by confirming his erroneous assumptions, cowed by their leader's violent censorship tactics).

Convinced that his threat perception was flawless, Stalin was deceived by German misinformation, which deliberately corroborated his faulty theory that (a) Germany would not open a war on two fronts and (b) Hitler would not initiate hostilities without first issuing an ultimatum. It is widely said that Stalin acted as "his own intelligence analyst"; the case is a classic example that even the best intelligence is not enough unless it is used effectively by decision makers.

OPERATION EAGLE CLAW

Operation Eagle Claw was an April 1980 U.S. commando operation initially intended to rescue the American diplomats and embassy staff who had been held hostage in the U.S. embassy in Tehran since November 1979. Approaching this kind of failure from a different angle, since it was a military operation, one can readily identify the strategic, operational and tactical flaws; it is safe to say that the operation's complexity, the amateurism, and the lack of expertise and proper equipment "sealed" the mission's fate.

Strategic Planning Failure: The extremely complex nature of the operation required the cooperation of two governments (Egypt and Oman), Delta Force, Iranian collaborators, the Nimitz Task Force and Green Beret Advance Teams; the seizure of three landing zones, the organization of a major refueling operation, and a 95 km drive to Tehran in borrowed trucks; in addition, it involved six transport planes, eight helicopters, and over 100 operatives remaining in enemy territory for more than 72 hours. It is well known that the golden rule for any military operation is "fast in, fast out", something the operation planners did not take into account.

The complexity of the mission and the cumulative errors practically guaranteed that, somewhere along the way, the Delta team would be compromised and the vital element of surprise lost; or, even worse, that the entire squad would be captured or killed.

Inadequate equipment and operatives: The selection of operatives and equipment was crucial to the failure of the mission. Aside from the fact that not enough helicopters were sent, the many inherent shortcomings of the helicopter used (the RH-53 Sea Stallion, which is not a combat assault helicopter and is not outfitted for the desert) further complicated the operation, while the men picked for the mission (including the pilots) were unsuitable because they were not adequately trained for commando/clandestine operations.

Lack of solid tactical structure and contingency plan: Apart from its simplistic assumptions and improvised character, the mission also lacked two major elements, flexibility and a contingency plan, both vital for the successful completion of an operation. The plan was rehearsed only in ideal weather conditions (no one seriously considered the possibility of a sandstorm, which caused the first two helicopter failures), and the operatives were never trained on an actual mock-up of the embassy compound (even if they had managed to get there, chances are they would have gotten lost inside it).

Secondly, there was no contingency plan for a rapid evacuation in case of equipment malfunction, detection, objective failure or other "imponderable factors" (e.g. the jeep incident). This undoubtedly contributed to the panicked reaction of the field commander (Beckwith) after the collision: he ordered immediate evacuation, leaving behind the bodies of eight servicemen, the helicopters and a copious amount of classified documents.

CONCLUSION

As Richard Betts argues, "intelligence failures are inevitable, because failure is primarily a result of politics and psychology, rather than analysis and organization". If intelligence always got everything right, it would not be intelligence; it would be omniscience. Given the complex nature of intelligence work, it is impossible to avoid miscalculations, misinterpretations, or unanticipated incidents. Above and beyond all else, intelligence is and always will be informed guesswork paired with a large dose of wishful thinking.