The Intelligence-Prompt Connection: Communicating Intent in the LLM Era
Requirements Definition Reimagined for Language Models
In intelligence analysis, clarity of purpose has always been paramount. The most sophisticated collection systems, brilliant analysts, and responsive dissemination networks all falter without a clear understanding of what question needs answering. This fundamental challenge—requirements definition—sits at the heart of the intelligence cycle for good reason. The emergence of Large Language Models (LLMs) presents a similar challenge in a new domain: how do we effectively communicate our intentions to these systems?
LLMs represent a significant departure from traditional software paradigms. While conventional applications operate on explicit, deterministic code with predictable outcomes, LLMs function through natural language—the same ambiguous, nuanced, and context-dependent medium we use in human communication. These systems don't follow hard-coded instructions; they generate responses based on probabilistic patterns learned from vast datasets of human knowledge and communication. This shift creates both new capabilities and unique challenges.
The engineering behind these models is remarkable: billions of parameters trained on trillions of tokens, sophisticated attention mechanisms, and neural architectures that process extensive context windows. Yet, despite this technical sophistication, comparatively little attention was initially paid to the human-AI interface—how users and intelligence professionals could effectively direct these systems toward specific objectives.
"The quality of an intelligence request profoundly shapes the resulting product. Similarly, the precision of prompts to LLMs determines the value of their responses. In both domains, clarity of purpose is not merely beneficial—it's essential."
This gap between backend capability and frontend usability has left many of us in a position not unlike the linguists in the film Arrival, facing an advanced intelligence with communication mechanisms that don't map neatly to our expectations. We must learn to speak a new language—not only of syntax and commands, but of intent, context, and precise signaling. Just as the film's protagonist discovers that communication with the heptapods requires understanding not just their symbolic language but their entire conceptual framework, working effectively with LLMs demands that we grasp both their capabilities and their fundamental limitations.
The parallels to intelligence work are clear. When an intelligence consumer tasks an analyst with a request, the quality of that request shapes the resulting product. Vague, ambiguous, or misaligned requirements inevitably lead to intelligence that fails to meet the actual need. Similarly, poorly crafted prompts to LLMs yield responses that may be technically correct but practically useless—or even misleading. In both domains, clarity of purpose is essential.
Consider how the communication challenges differ in practice:
With human intelligence consumers, an analyst might contend with:
Cognitive biases that shape how they frame questions (a policymaker asking "What evidence supports the threat from this terrorist group?" versus "What is the current operational capability and intent of this terrorist group?")
Information format preferences that affect comprehension (some principals prefer dense data tables while others need visual representations)
Shifting priorities that remain unstated (an analyst keeps publishing weekly intelligence bulletins when decision-makers actually need immediate warning of an emerging threat)
Subject matter expertise gaps that lead to imprecise terminology in requirements
With LLMs, prompt engineers must navigate, among other constraints:
Context window limitations that constrain how much background information can be provided (a minimal token-budgeting sketch follows this list)
Instruction sensitivity that varies between models (some require explicit step-by-step guidance while others respond better to outcome-focused prompts)
Multimodal processing constraints when dealing with images, charts, or other non-textual inputs
Base model behaviors that might reflect certain reasoning patterns or knowledge cutoff dates despite explicit instructions
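To make the first of these constraints concrete, here is a minimal token-budgeting sketch using the tiktoken library. The 8,000-token budget, the encoding choice, and the sample text are illustrative assumptions, not settings from any particular deployment.

```python
# Illustrative token budgeting for a fixed context window. The 8,000-token
# budget and the cl100k_base encoding are assumptions for this sketch.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def fit_to_budget(background: str, budget: int = 8000) -> str:
    """Trim background material so it fits an assumed token budget,
    keeping the most recent text and dropping older context first."""
    tokens = enc.encode(background)
    if len(tokens) <= budget:
        return background
    return enc.decode(tokens[-budget:])

# Stand-in for a long run of field reporting.
background = "SITREP: routine activity observed near the border. " * 2000
print(len(enc.encode(background)))                  # well over the assumed budget
print(len(enc.encode(fit_to_budget(background))))   # at (or very near) the budget
```

In practice, summarizing or ranking background passages usually beats blind truncation, but the budget arithmetic is the same.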
Prompt engineering has emerged as a critical discipline for intelligence professionals using AI. Just as skilled intelligence officers learn to elicit precise requirements and refine broad questions into actionable tasking, we must develop expertise in translating human intentions into formats that maximize an LLM's ability to deliver valuable outputs. Both practices require deep understanding of the underlying systems, attention to detail, and an iterative approach to refinement.
For risk intelligence professionals, mastering prompt engineering represents a natural extension of core analytical skills. The ability to frame questions precisely, provide relevant context, and specify constraints aligns with traditional intelligence tradecraft. The primary difference is the recipient of these carefully crafted communications: not a human analyst but an AI system with distinct capabilities and limitations.
Prompt engineering, like intelligence requirements definition, is both practical methodology and nuanced skill. It demands technical knowledge of model behaviors and capabilities and a grasp of core concepts like tokenization and vector embeddings, but also intuition about how language shapes understanding. It requires structured approaches and established patterns, but also creative problem-solving when standard techniques fall short. Most importantly, it acknowledges that communication, whether with human sources or artificial intelligence, is inherently imperfect and requires continuous refinement.
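As a small illustration of why vector embeddings matter here: semantically related requests map to nearby vectors, and cosine similarity measures that closeness. The three-dimensional vectors below are hand-picked stand-ins; real embeddings come from an embedding model and have hundreds or thousands of dimensions.

```python
# Toy demonstration of embedding similarity. The 3-d vectors are invented
# stand-ins for real embedding-model output.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

capability = np.array([0.9, 0.1, 0.2])  # "operational capability of the group"
intent     = np.array([0.8, 0.3, 0.1])  # "current intent of the group"
harvest    = np.array([0.1, 0.9, 0.7])  # "apple harvest forecasts"

print(cosine_similarity(capability, intent))   # high (~0.97): related requests
print(cosine_similarity(capability, harvest))  # low (~0.30): unrelated topic
```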
What’s Next
Prompt engineering best practices, research surveys, and guidebooks abound, and a comprehensive list is beyond the scope of this article. For those looking to deepen their skills, I recommend bookmarking these essential references: The Prompt Engineering Guide, Anthropic’s User Guides, OpenAI docs, and Google’s latest guidebook.
The more interesting question is whether prompt engineering itself will remain relevant long-term. The rapid evolution of AI applications and frameworks suggests we may be moving toward systems that handle query clarification and disambiguation automatically.

AI-driven query disambiguation and prompt optimization are already transforming search systems (such as Perplexity) and conversational agents. These systems now leverage semantic parsing, entity recognition, and contextual analysis to clarify ambiguous queries, distinguishing between "apple" the fruit and "Apple Inc." without explicit user direction. More advanced approaches include interactive clarification, where the system identifies confusion and asks targeted follow-up questions, and attribute selection that maximizes clarity while minimizing user interaction.
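A minimal sketch of that interactive-clarification pattern, using the OpenAI Python client. The system prompt, the model name, and the example query are assumptions for illustration, not a production design.

```python
# Minimal interactive-clarification check. The system prompt and model name
# are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CLARIFY = (
    "If the user's query is ambiguous, reply with exactly one clarifying "
    "question. If it is unambiguous, reply with the single word CLEAR."
)

def maybe_clarify(query: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute your deployment
        messages=[
            {"role": "system", "content": CLARIFY},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content

# Likely asks whether "apple" means the fruit or the company.
print(maybe_clarify("What is the latest on apple?"))
```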
On the prompt optimization front, techniques like few-shot prompting, meta-prompting with reflection, and gradient-based optimization are increasingly automated. Frameworks like DSPy now handle prompt refinement at scale, iteratively improving specificity and relevance without human intervention. These developments point toward AI agents that deliver more accurate, context-aware responses with less explicit direction from users—potentially making traditional prompt engineering obsolete for all but the most specialized applications.
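For a feel of what that automation looks like, here is a sketch in the style of DSPy: declare the task as a signature, then let an optimizer compile the prompt from labeled examples. The field names and the task are hypothetical, and DSPy's API details may differ across versions.

```python
# Sketch of declarative prompt programming in the style of DSPy. Field names
# and the task are hypothetical; check DSPy's docs for current API details.
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # assumed model identifier

class ThreatSummary(dspy.Signature):
    """Summarize a raw field report into a one-paragraph threat assessment."""
    report = dspy.InputField()
    assessment = dspy.OutputField()

program = dspy.ChainOfThought(ThreatSummary)
result = program(report="Observed increased vehicle movement at the depot.")
print(result.assessment)

# An optimizer can then refine the prompt and pick few-shot demonstrations
# automatically against a labeled trainset, e.g.:
# from dspy.teleprompt import BootstrapFewShot
# compiled = BootstrapFewShot(metric=my_metric).compile(program, trainset=examples)
```

The design point is the separation of concerns: the analyst specifies what the task is, and the framework searches over how to prompt for it.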
Are you an intelligence or risk management professional looking to enhance your AI skills? Join the ARIM Network for our comprehensive lecture series on prompt engineering specifically designed for intelligence applications.