
Mind Over Malware – The Psychology of CTI

Why Psychology of CTI Matters

The Analyst Brain: Pattern-Seeking, Narrative-Driven

At its core, cyber threat intelligence is an exercise in making sense of the unknown. Analysts are constantly connecting dots, linking indicators, behaviors, and motivations to form coherent pictures of threat activity. This makes CTI as much a psychological task as it is a technical one.

The human brain is hardwired to seek patterns. From an evolutionary standpoint, this helped us survive: detecting danger in rustling leaves or spotting animal tracks in the dirt. In CTI, this instinct drives our ability to recognize malware patterns, connect infrastructure, and identify recurring attacker techniques. It’s what allows an analyst to say, “I’ve seen this before! This looks like TA505.”

Equally important is our tendency to build narratives. When presented with fragmented data, we fill in the blanks to create meaning. We explain what happened, why it happened, and what’s likely to happen next. This narrative-building instinct helps us write compelling reports and communicate threats to decision-makers. It turns raw data into actionable intelligence.

But these same strengths can become liabilities.

The pattern-seeking brain can find connections where none exist and invent false patterns, especially under pressure or when working with incomplete data. The narrative-driven mind can become so invested in a theory that it filters out contradictory evidence just to keep the story intact.

This is why psychology matters in CTI. It’s not just about detecting attacker behavior; it’s about understanding our own. Without awareness of these mental habits, even the most experienced analysts can fall into traps of bias, overconfidence, and misattribution.

When we acknowledge how our minds work, we don’t become weaker analysts; we become better ones. We learn to challenge assumptions, embrace structured reasoning, and improve the quality and integrity of our assessments.

Common Cognitive Biases in CTI

And How They Manifest in Team Environments

Cyber threat intelligence is often viewed as a field grounded in logic, technical evidence, and structured workflows. However, it is ultimately a human-driven discipline. Every assessment, hypothesis, or attribution is shaped by how the analyst interprets data. That interpretation is never neutral—it is filtered through the lens of experience, assumptions, and mental shortcuts known as cognitive biases. Understanding these biases is essential for improving analytic rigor and avoiding misjudgments that can lead to incorrect conclusions or missed threats.

1. Confirmation Bias

Confirmation bias is the tendency to seek out, prioritize, or interpret information in a way that supports existing beliefs or assumptions.

Example in CTI: An analyst becomes convinced that recent activity resembles APT29 and subsequently dismisses or downplays data that contradicts that attribution.

2. Anchoring Bias

Anchoring occurs when analysts rely too heavily on the first piece of information they receive, allowing it to overly influence subsequent judgments—even if the initial data is incomplete or misleading.

Example in CTI: A single PowerShell alert leads to a working theory of malware infection, anchoring the entire investigation despite new evidence suggesting an internal script was the true source.

3. Satisficing

Satisficing is a decision-making shortcut where analysts settle for the first plausible explanation that seems “good enough,” rather than exploring alternatives to identify the most accurate or complete one.

Example in CTI: In the middle of an incident, a team sees a phishing email and quickly concludes it is the root cause of a compromise—without verifying delivery, user interaction, or execution artifacts.

4. Availability Heuristic

The availability heuristic leads analysts to judge the likelihood of an event based on how easily they can recall examples of it—usually events that are recent, high-profile, or emotionally charged.

Example in CTI: After reviewing an industry alert about a widespread Qakbot campaign, an analyst begins interpreting benign login anomalies as signs of Qakbot infection because the campaign is fresh in memory.

Bias in Team Environments

Bias does not only affect individuals—it can be amplified in collaborative settings if not managed intentionally. Team dynamics can subtly discourage dissent, reinforce faulty logic, or build false consensus.

  • Groupthink: Teams may unconsciously suppress disagreement to maintain harmony, especially under pressure. When a strong voice dominates, others may hesitate to challenge the prevailing view—even if they have concerns.
  • Echo Chambers: Repeated exposure to certain threat actors or intrusion techniques can cause teams to over-prioritize those patterns and overlook new or less familiar indicators.
  • Consensus Anchoring: The first person to offer a hypothesis in a group setting often sets the tone. Subsequent analysis may conform to that hypothesis, even when divergent data is available.

Cognitive biases are not flaws; they are natural tendencies in human thinking. However, they can significantly distort intelligence assessments if left unchecked. Recognizing and accounting for bias is a fundamental skill in CTI. In the next module, we will introduce structured analytic techniques such as Analysis of Competing Hypotheses (ACH) that help mitigate these biases and improve the objectivity and defensibility of our work.

What Are Structured Analytic Techniques—And Why Do They Matter?

Structured Analytic Techniques (SATs) are systematic, repeatable methods used to improve the quality, transparency, and defensibility of intelligence analysis. Rather than relying solely on intuition or experience, SATs give analysts a process to follow when forming judgments—especially in conditions of uncertainty, limited data, or potential bias.

In short, SATs help analysts think more clearly, question more rigorously, and document their reasoning more transparently.

They don’t replace an analyst’s expertise. Instead, they help sharpen it—by slowing down thinking just enough to catch blind spots and challenge assumptions.

Why SATs Matter in Cyber Threat Intelligence

Cyber threat intelligence often involves interpreting incomplete, fast-moving, and sometimes contradictory data. The pressure to produce timely assessments can lead to premature conclusions, overlooked evidence, or misattribution. In these situations, SATs serve three essential purposes:

  • Bias mitigation: SATs introduce structure that counters the mental shortcuts and emotional reasoning that lead to error.
  • Defensibility: Well-documented SATs provide a clear rationale for judgments, which is essential when intelligence is challenged by peers, stakeholders, or leadership.
  • Collaboration: Techniques like Analysis of Competing Hypotheses (ACH) or Key Assumptions Check allow analysts to compare thinking and avoid groupthink by evaluating alternative perspectives.

When to Use SATs (Not Just for Show)

Structured Analytic Techniques are not meant to be used on every routine task. They are most valuable when:

  • The stakes are high: If an assessment could drive executive action, resource allocation, or public reporting, a SAT ensures greater confidence in the result.
  • The question is complex or ambiguous: When data points to multiple plausible conclusions, SATs help break through the noise and avoid tunnel vision.
  • There is limited or conflicting evidence: Instead of defaulting to a gut feeling, SATs make the analyst consider all sides of a scenario—especially when the data is thin.
  • You sense bias—yours or others’: When groupthink, anchoring, or premature attribution creeps in, SATs introduce discipline and transparency.
  • You want to grow as an analyst: Using SATs, especially during peer reviews or post-mortems, trains better decision-making and creates a culture of intellectual humility and rigor.

Structured Analytic Techniques aren’t just academic exercises. When used intentionally, they become tools for analytical resilience: the ability to stay sharp, grounded, and objective even in high-pressure or uncertain situations. They turn “I think” into “Here’s why.”

Why Structured Thinking Matters

When an analyst makes a judgment, how do they know it’s sound? How can they explain their reasoning in a way others can trust—and challenge?

In threat intelligence, where uncertainty is the norm and time pressure is high, it’s easy to form conclusions too quickly and defend them too strongly. That’s where ACH comes in.

ACH is a structured analytic technique designed to reduce bias, improve rigor, and force consideration of multiple explanations—not just the one that feels most likely. Rather than asking, “What fits my theory?”, ACH asks, “Which hypothesis best fits the evidence—and which ones don’t?”

The ACH Process: Step-by-Step

Step 1: Define the Problem Clearly

Start by framing the central question you want to answer. Be as specific as possible.

Example: Who is behind this credential harvesting campaign targeting financial institutions?

Step 2: Identify All Plausible Hypotheses

List all reasonable explanations—even ones that seem unlikely or inconvenient.

Example Hypotheses:

  • H1: FIN7 is responsible
  • H2: TA505 is responsible
  • H3: A new or unknown actor is mimicking known TTPs
  • H4: Internal red team activity mistaken for malicious behavior

Step 3: Gather Relevant Evidence

Pull together all facts, observations, and data points—both technical (e.g., infrastructure, malware) and contextual (e.g., targeting patterns, timing).

Step 4: Analyze Consistency/Inconsistency

For each piece of evidence, assess how consistent, inconsistent, or neutral it is with each hypothesis.
This step is typically completed in a matrix format.

Evidence | H1: FIN7 | H2: TA505 | H3: Unknown | H4: Red Team
Domain registered in Ukraine | Consistent | Inconsistent | Neutral | Inconsistent
C2 overlaps with TA505 infrastructure | Inconsistent | Consistent | Neutral | Inconsistent
Credential theft via fake invoice | Consistent | Consistent | Neutral | Inconsistent
Activity during business hours only | Neutral | Neutral | Neutral | Consistent

Step 5: Focus on Inconsistencies

The goal is not to prove a hypothesis; it is to eliminate the ones that don’t fit the facts. Hypotheses with the most inconsistent evidence are usually the weakest. This step is where bias is challenged most directly. A small worked sketch of this tallying logic follows below.
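
To make Steps 4 and 5 concrete, here is a minimal Python sketch of the matrix above. The hypothesis names and consistency scores come from the example table; the simple inconsistency count is just one illustrative way to tally the matrix, not a prescribed ACH scoring method.

```python
# Minimal sketch of an ACH matrix, using the example scores from the table above.
# "C" = consistent, "I" = inconsistent, "N" = neutral with respect to a hypothesis.

HYPOTHESES = ["H1: FIN7", "H2: TA505", "H3: Unknown actor", "H4: Red team"]

MATRIX = {
    "Domain registered in Ukraine":          ["C", "I", "N", "I"],
    "C2 overlaps with TA505 infrastructure": ["I", "C", "N", "I"],
    "Credential theft via fake invoice":     ["C", "C", "N", "I"],
    "Activity during business hours only":   ["N", "N", "N", "C"],
}

def inconsistency_counts(matrix):
    """Count how many pieces of evidence contradict each hypothesis."""
    counts = [0] * len(HYPOTHESES)
    for scores in matrix.values():
        for i, score in enumerate(scores):
            if score == "I":
                counts[i] += 1
    return counts

if __name__ == "__main__":
    # Hypotheses with the most inconsistent evidence are usually the weakest.
    ranked = sorted(zip(HYPOTHESES, inconsistency_counts(MATRIX)), key=lambda hc: hc[1])
    for hypothesis, count in ranked:
        print(f"{hypothesis}: {count} inconsistent item(s)")
```

In this toy example, the red-team hypothesis (H4) collects the most inconsistencies and becomes the first candidate to eliminate, while H3 has none and cannot yet be ruled out. That mirrors the elimination logic described above: the matrix surfaces what the evidence rules out, not what it "proves."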

Step 6: Draw a Tentative Conclusion

Determine which hypothesis best fits the overall picture. Remember, you are making a judgment based on incomplete data, so confidence levels matter.

Step 7: Identify Key Assumptions and Gaps

List any assumptions made during analysis and identify which ones—if proven wrong—would impact the outcome. This is crucial for transparency and feedback loops.

Step 8: Review with Fresh Eyes

Step away. Then revisit the matrix or present it to a colleague. What did you miss? Is there a hypothesis you dismissed too quickly?

Why ACH Helps Counter Bias

ACH helps disrupt several common analytical pitfalls:

  • Confirmation bias: Forces consideration of disconfirming evidence
  • Anchoring: Prevents over-reliance on early indicators
  • Satisficing: Encourages full evaluation of alternatives
  • Groupthink: Creates a transparent, testable reasoning trail for team review

Feedback Loops and Intelligence Self-Review

Improving CTI Through Iteration, Not Assumption

In cyber threat intelligence, the work doesn’t end when a report is submitted. While it’s easy to view intelligence as a product with a clear deliverable—an attribution call, a summary of activity, or a list of IOCs—what truly separates mature CTI programs from ad hoc efforts is the presence of feedback loops. Intelligence must be treated as a cycle, not a straight line. Without revisiting and evaluating past assessments, teams miss out on critical opportunities to improve their reasoning, correct faulty assumptions, and refine future reporting.

A feedback loop is the process of revisiting previous intelligence outputs to assess their accuracy, impact, and relevance. These loops help analysts identify where their judgments were strong, where bias may have crept in, and whether their intelligence was actually useful to stakeholders. Feedback is also essential for building institutional memory—creating a record of lessons learned that can guide both new and experienced analysts toward stronger analysis.

Feedback can come from multiple sources. Incident response outcomes can validate or contradict prior assessments, offering real-world proof of how accurate initial hypotheses were. Detection engineering teams can report back on whether intelligence-informed detections triggered as expected—or failed entirely. Stakeholders themselves are a vital feedback source: Did they act on the intelligence? Did they understand the risk? Was it delivered in a format or timeframe that supported their decision-making? Even open-source intelligence and threat reporting from other vendors can help confirm or challenge prior judgments, especially when new indicators emerge or attribution is revised.

Alongside external feedback, internal self-review is a key part of the process. Self-review means stepping back to evaluate an intelligence product with fresh eyes—often weeks or months after delivery. The goal is not to critique for the sake of criticism, but to identify patterns in thought and methodology. Analysts might ask: What assumptions did we make? Did we properly weigh all available evidence, or did we favor certain data points? Were our confidence levels appropriate? Were we too early to attribute, or did we miss a chance to provide strategic context? Even short, reflective sessions can expose recurring tendencies, such as overreliance on certain sources, gaps in stakeholder alignment, or phrasing that overstated certainty.

To be effective, feedback must be normalized as a healthy part of the team culture. Intelligence professionals should be encouraged to challenge not only adversaries, but also their own processes and outputs. Regular team activities—such as post-report debriefs, quarterly retrospectives, or internal red-teaming of assessments—can make feedback part of the workflow rather than an afterthought. Keeping a living record of Priority Intelligence Requirements (PIRs) and how well reports align with them can also reveal whether reporting is staying relevant to stakeholder needs.
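
As one illustration, here is a hypothetical Python sketch of what a living PIR record might look like. The PIR class, its field names, and the identifiers are illustrative assumptions rather than a standard format; the point is simply that tracking which reports answered which requirements makes alignment gaps visible.

```python
# Hypothetical sketch of a living PIR record; field names and IDs are illustrative.
from dataclasses import dataclass, field

@dataclass
class PIR:
    pir_id: str
    question: str                                 # the standing intelligence requirement
    reports: list = field(default_factory=list)   # report IDs that addressed it
    feedback: list = field(default_factory=list)  # stakeholder feedback notes

    def coverage(self) -> int:
        """Number of reports that have addressed this PIR so far."""
        return len(self.reports)

# Example usage with made-up identifiers.
pir = PIR("PIR-001", "Which actors are targeting our payment infrastructure?")
pir.reports.append("RPT-2025-014")
pir.feedback.append("Useful context, but delivered after the patching decision was made.")
print(f"{pir.pir_id} covered by {pir.coverage()} report(s); feedback items: {len(pir.feedback)}")
```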

This process is tightly connected to the discussion on bias. One of the most effective ways to detect cognitive bias is in hindsight, when outcomes don’t match expectations. Feedback loops make these mismatches visible and actionable. By reviewing previous reports in light of what actually occurred, analysts can identify where anchoring, satisficing, or confirmation bias may have influenced their thinking. Over time, this builds mental discipline and strengthens the analyst’s ability to reason objectively, even under pressure.

Ultimately, feedback loops and self-review are not about being perfect; they’re about being committed to progress. Intelligence isn’t just about getting the answer right in the moment. It’s about building a thinking process that improves with each cycle. CTI teams that take feedback seriously not only improve accuracy, but also build trust, credibility, and long-term value within their organizations.

Sharpening the Mind Behind the Intel

As you complete this Upskill Challenge, it’s worth pausing to reflect on a truth that often goes unspoken in technical disciplines: cyber threat intelligence is ultimately powered by people, and people, no matter how skilled or experienced, are susceptible to bias, blind spots, and flawed assumptions.

Throughout the modules, you’ve explored how the human brain’s strengths—pattern recognition, storytelling, fast decision-making—can become weaknesses when left unchecked. You’ve seen how common biases like confirmation, anchoring, and satisficing can distort analysis, especially under pressure. More importantly, you’ve learned how to detect and defend against these tendencies through structured methods like Analysis of Competing Hypotheses (ACH) and intelligence self-review.

Structured Analytic Techniques aren’t just tools to improve assessments—they’re tools to improve analysts. They force us to slow down, challenge our thinking, and consider alternative explanations, even when one hypothesis feels “right.” They help turn intuition into insight and guesswork into defensible judgment.

Equally essential are feedback loops. Intelligence is not a one-time deliverable—it is an ongoing process of learning. By revisiting past work, comparing outcomes to expectations, and asking tough questions about what went right or wrong, we evolve as professionals and as teams. Feedback helps us replace bias with reflection, error with improvement, and assumptions with evidence.

The analyst who seeks constant improvement, who questions their own reasoning as carefully as they question the adversary’s actions, will always deliver stronger intelligence.

This course was designed not just to teach you techniques, but to help you build a mindset: one grounded in curiosity, humility, and analytical discipline. Whether you’re attributing threat activity, guiding detection strategy, or writing intelligence for decision-makers, you now have a set of tools that will serve you well and a framework to build on.

The next step is practice. Bias isn’t something we eliminate; it’s something we learn to recognize and manage, day after day. Use what you’ve learned here to question assumptions, apply structured thinking, and seek feedback at every opportunity. Over time, you’ll become not just a more accurate analyst but a more trusted one.



Additional content in this category: