The following article comes from two blog posts by Drew Breunig: “How Long Contexts Fail” and “How to Fix Your Contexts.”
Managing Your Context Is the Key to Successful Agents
As frontier model context windows continue to grow,1 with many supporting up to 1 million tokens, I see many excited discussions about how long context windows will unlock the agents of our dreams. After all, with a large enough window, you can simply throw everything you might need into a prompt—tools, documents, instructions, and more—and let the model take care of the rest.
Long contexts kneecapped RAG enthusiasm (no need to find the best document when you can fit it all in the prompt!), enabled MCP hype (connect to every tool and models can do any job!), and fueled enthusiasm for agents.2
But in reality, longer contexts do not generate better responses. Overloading your context can cause your agents and applications to fail in surprising ways. Contexts can become poisoned, distracting, confusing, or conflicting. This is especially problematic for agents, which rely on context to gather information, synthesize findings, and coordinate actions.
Let’s run through the ways contexts can get out of hand, then review methods to mitigate or entirely avoid context failures.
Context Poisoning
Context poisoning is when a hallucination or other error makes it into the context, where it is repeatedly referenced.
The DeepMind team called out context poisoning in the Gemini 2.5 technical report, which we broke down previously. When playing Pokémon, the Gemini agent would occasionally hallucinate, poisoning its context:
An especially egregious form of this issue can take place with “context poisoning”—where many parts of the context (goals, summary) are “poisoned” with misinformation about the game state, which can often take a very long time to undo. As a result, the model can become fixated on achieving impossible or irrelevant goals.
If the “goals” section of its context was poisoned, the agent would develop nonsensical strategies and repeat behaviors in pursuit of a goal that cannot be met.
Context Distraction
Context distraction is when a context grows so long that the model over-focuses on the context, neglecting what it learned during training.
As context grows during an agentic workflow—as the model gathers more information and builds up history—this accumulated context can become distracting rather than helpful. The Pokémon-playing Gemini agent demonstrated this problem clearly:
While Gemini 2.5 Pro supports 1M+ token context, making effective use of it for agents presents a new research frontier. In this agentic setup, it was observed that as the context grew significantly beyond 100k tokens, the agent showed a tendency toward favoring repeating actions from its vast history rather than synthesizing novel plans. This phenomenon, albeit anecdotal, highlights an important distinction between long-context for retrieval and long-context for multistep, generative reasoning.
Instead of using its training to develop new strategies, the agent became fixated on repeating past actions from its extensive context history.
For smaller models, the distraction ceiling is much lower. A Databricks study found that model correctness began to fall around 32k tokens for Llama 3.1-405b, and earlier for smaller models.
If models start to misbehave long before their context windows are filled, what’s the point of super large context windows? In a nutshell: summarization3 and fact retrieval. If you’re not doing either of those, be wary of your chosen model’s distraction ceiling.
Context Confusion
Context confusion is when superfluous content in the context is used by the model to generate a low-quality response.
For a minute there, it really seemed like everyone was going to ship an MCP. The dream of a powerful model, connected to all of your services and stuff, doing all your mundane tasks felt within reach. Just throw all the tool descriptions into the prompt and hit go. Claude’s system prompt showed us the way, as it’s mostly tool definitions or instructions for using tools.
But even if consolidation and competition don’t slow MCPs, context confusion will. It turns out there can be such a thing as too many tools.
The Berkeley Function-Calling Leaderboard is a tool-use benchmark that evaluates the ability of models to effectively use tools to respond to prompts. Now on its third version, the leaderboard shows that every model performs worse when provided with more than one tool.4 Further, the Berkeley team “designed scenarios where none of the provided functions are relevant…we expect the model’s output to be no function call.” Yet, all models will occasionally call tools that aren’t relevant.
Browsing the function-calling leaderboard, you can see the problem worsen as the models get smaller:
[Figure: function-calling leaderboard accuracy, declining as models get smaller]
A striking example of context confusion can be seen in a recent paper that evaluated small model performance on the GeoEngine benchmark, a trial that features 46 different tools. When the team gave a quantized (compressed) Llama 3.1 8b a query with all 46 tools, it failed, even though the context was well within the 16k context window. But when they only gave the model 19 tools, it succeeded.
The problem is, if you put something in the context, the model has to pay attention to it. It may be irrelevant information or needless tool definitions, but the model will take it into account. Large models, especially reasoning models, are getting better at ignoring or discarding superfluous context, but we regularly see worthless information trip up agents. Longer contexts let us stuff in more information, but this ability comes with downsides.
Context Clash
Context clash is when you accrue new information and tools in your context that conflict with other information in the context.
This is a more problematic version of context confusion. The bad context here isn’t irrelevant, it directly conflicts with other information in the prompt.
A Microsoft and Salesforce team documented this brilliantly in a recent paper. The team took prompts from multiple benchmarks and “sharded” their information across multiple prompts. Think of it this way: Sometimes, you might sit down and type paragraphs into ChatGPT or Claude before you hit enter, considering every necessary detail. Other times, you might start with a simple prompt, then add further details when the chatbot’s answer isn’t satisfactory. The Microsoft/Salesforce team modified benchmark prompts to look like these multistep exchanges:
[Figure: a complete single prompt versus the same information sharded across multiple chat messages]
All the information from the prompt on the left side is contained within the multiple messages on the right side, which would be played out over multiple chat rounds.
The sharded prompts yielded dramatically worse results, with an average drop of 39%. And the team tested a range of models—OpenAI’s vaunted o3’s score dropped from 98.1 to 64.1.
What’s going on? Why are models performing worse when information is gathered in stages rather than all at once?
The answer is context confusion: The assembled context, containing the entirety of the chat exchange, contains early attempts by the model to answer the challenge before it has all the information. These incorrect answers remain present in the context and influence the model when it generates its final answer. The team writes:
We find that LLMs often make assumptions in early turns and prematurely attempt to generate final solutions, on which they overly rely. In simpler terms, we discover that when LLMs take a wrong turn in a conversation, they get lost and do not recover.
This doesn’t bode well for agent builders. Agents assemble context from documents, tool calls, and from other models tasked with subproblems. All of this context, pulled from diverse sources, has the potential to disagree with itself. Further, when you connect to MCP tools you didn’t create, there’s a greater chance their descriptions and instructions clash with the rest of your prompt.
Learnings
The arrival of million-token context windows felt transformative. The ability to throw everything an agent might need into the prompt inspired visions of superintelligent assistants that could access any document, connect to every tool, and maintain perfect memory.
But, as we’ve seen, bigger contexts create new failure modes. Context poisoning embeds errors that compound over time. Context distraction causes agents to lean heavily on their context and repeat past actions rather than push forward. Context confusion leads to irrelevant tool or document usage. Context clash creates internal contradictions that derail reasoning.
These failures hit agents hardest because agents operate in exactly the scenarios where contexts balloon: gathering information from multiple sources, making sequential tool calls, engaging in multi-turn reasoning, and accumulating extensive histories.
Fortunately, there are solutions!
Mitigating and Avoiding Context Failures
Let’s run through the ways we can mitigate or avoid context failures entirely.
Everything is about information management. Everything in the context influences the response. We’re back to the old programming adage of “garbage in, garbage out.” Thankfully, there are plenty of options for dealing with the issues above.
RAG
Retrieval-augmented generation (RAG) is the act of selectively adding relevant information to help the LLM generate a better response.
Because so much has been written about RAG, we’re not going to cover it here beyond saying: It’s very much alive.
Every time a model ups the context window ante, a new “RAG is dead” debate is born. The last significant event was when Llama 4 Scout landed with a 10 million token window. At that size, it’s really tempting to think, “Screw it, throw it all in,” and call it a day.
But, as we’ve already covered, if you treat your context like a junk drawer, the junk will influence your response. If you want to learn more, here’s a new course that looks great.
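Still, to make the definition concrete, here is a minimal retrieve-then-prompt sketch. It uses a toy bag-of-words similarity purely for illustration; a real system would swap in an embedding model and a vector store, and every name below is hypothetical:

from collections import Counter
import math

def embed(text):
    # Toy "embedding": word counts. A stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def build_prompt(query, docs, k=3):
    # Rank documents by similarity to the query and keep only the top k,
    # so only relevant material enters the context.
    top = sorted(docs, key=lambda d: cosine(embed(d), embed(query)), reverse=True)[:k]
    context = "\n\n".join(top)
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"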
Tool Loadout
Tool loadout is the act of selecting only relevant tool definitions to add to your context.
The term “loadout” is a gaming term that refers to the specific combination of abilities, weapons, and equipment you select before a level, match, or round. Usually, your loadout is tailored to the context—the character, the level, the rest of your team’s makeup, and your own skill set. Here, we’re borrowing the term to describe selecting the most relevant tools for a given task.
Perhaps the simplest way to select tools is to apply RAG to your tool descriptions. This is exactly what Tiantian Gan and Qiyao Sun did, which they detail in their paper “RAG MCP.” By storing their tool descriptions in a vector database, they’re able to select the most relevant tools given an input prompt.
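As a rough sketch of the idea (not the authors’ implementation), you can embed each tool description once, then admit only the closest matches into the prompt per query. This example assumes the sentence-transformers library; the tool list is illustrative:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Illustrative tool definitions; a real deployment might have dozens or hundreds.
tools = [
    {"name": "get_weather", "description": "Fetch the current weather for a city."},
    {"name": "search_flights", "description": "Search airline flights between two airports."},
    {"name": "convert_currency", "description": "Convert an amount between two currencies."},
]

tool_embeddings = model.encode([t["description"] for t in tools])

def select_tools(query, k=2):
    # Score every tool description against the query; keep the top k so the
    # prompt carries only a small, relevant loadout.
    scores = util.cos_sim(model.encode(query), tool_embeddings)[0]
    ranked = sorted(zip(scores.tolist(), tools), key=lambda pair: pair[0], reverse=True)
    return [tool for _, tool in ranked[:k]]

print(select_tools("How much is 100 euros in yen?"))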
When prompting DeepSeek-v3, the team found that selecting the right tools becomes critical when you have more than 30 tools. Above 30, the descriptions of the tools begin to overlap, creating confusion. Beyond 100 tools, the model was virtually guaranteed to fail their test. Using RAG techniques to select fewer than 30 tools yielded dramatically shorter prompts and resulted in as much as 3x better tool selection accuracy.
For smaller models, the problems begin long before we hit 30 tools. One paper we touched on previously, “Less is More,” demonstrated that Llama 3.1 8b fails a benchmark when given 46 tools, but succeeds when given only 19 tools. The issue is context confusion, not context window limitations.
To address this issue, the team behind “Less is More” developed a way to dynamically select tools using an LLM-powered tool recommender. The LLM was prompted to reason about the “number and type of tools it ‘believes’ it requires to answer the user’s query.” This output was then semantically searched (tool RAG, again) to determine the final loadout. They tested this method on the Berkeley Function-Calling Leaderboard, finding Llama 3.1 8b performance improved by 44%.
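That two-stage flow is easy to sketch. Here, call_llm stands in for whatever model API you use, and select_tools is a tool-RAG helper like the one sketched above (both hypothetical):

def recommend_tools(query, call_llm, select_tools, k=5):
    # Stage 1: ask the model to reason about what tools the query needs.
    reasoning_prompt = (
        "Describe the number and type of tools you believe are required "
        f"to answer the user's query:\n\n{query}"
    )
    tool_needs = call_llm(reasoning_prompt)
    # Stage 2: semantically search tool descriptions using that answer,
    # returning the final loadout.
    return select_tools(tool_needs, k=k)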
The “Less is More” paper notes two other benefits to smaller contexts—reduced power consumption and speed—crucial metrics when operating at the edge (meaning, running an LLM on your phone or PC, not on a specialized server). Even when their dynamic tool selection method failed to improve a model’s result, the power savings and speed gains were worth the effort, yielding savings of 18% and 77%, respectively.
Thankfully, most agents have small surface areas that only require a few hand-curated tools. But if the breadth of functions or the number of integrations needs to expand, always consider your loadout.
Context Quarantine
Context quarantine is the act of isolating contexts in their own dedicated threads, each used separately by one or more LLMs.
We see better results when our contexts aren’t too long and don’t sport irrelevant content. One way to achieve this is to break our tasks up into smaller, isolated jobs—each with its own context.
There are plenty of examples of this tactic, but an accessible write-up of this strategy is Anthropic’s blog post detailing its multi-agent research system. They write:
The essence of search is compression: distilling insights from a vast corpus. Subagents facilitate compression by operating in parallel with their own context windows, exploring different aspects of the question simultaneously before condensing the most important tokens for the lead research agent. Each subagent also provides separation of concerns—distinct tools, prompts, and exploration trajectories—which reduces path dependency and enables thorough, independent investigations.
Research lends itself to this design pattern. When given a question, several subquestions or areas of exploration can be identified and prompted separately, each by its own agent. This not only speeds up the information gathering and distillation (if there’s compute available), but it keeps each context from accruing too much information or information not relevant to a given prompt, delivering higher quality results:
Our internal evaluations show that multi-agent research systems excel especially for breadth-first queries that involve pursuing multiple independent directions simultaneously. We found that a multi-agent system with Claude Opus 4 as the lead agent and Claude Sonnet 4 subagents outperformed single-agent Claude Opus 4 by 90.2% on our internal research eval. For example, when asked to identify all the board members of the companies in the Information Technology S&P 500, the multi-agent system found the correct answers by decomposing this into tasks for subagents, while the single-agent system failed to find the answer with slow, sequential searches.
This approach also helps with tool loadouts, as the agent designer can create several agent archetypes, each with its own dedicated loadout and instructions for how to utilize each tool.
The challenge for agent builders, then, is to find opportunities for isolated tasks to spin out onto separate threads. Problems that require context-sharing among multiple agents aren’t particularly suited to this tactic.
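A minimal sketch of the pattern, assuming a placeholder call_llm function for your model API: each subagent gets a fresh, isolated message history, and only its condensed findings return to the lead agent.

from concurrent.futures import ThreadPoolExecutor

def run_subagent(subquestion, call_llm):
    # Fresh context per subagent: nothing from the lead agent's history
    # or from sibling subagents leaks in.
    messages = [
        {"role": "system", "content": "Research the question and reply with a brief summary."},
        {"role": "user", "content": subquestion},
    ]
    return call_llm(messages)

def research(question, subquestions, call_llm):
    # Quarantine each subtask in its own thread and context window.
    with ThreadPoolExecutor() as pool:
        findings = list(pool.map(lambda q: run_subagent(q, call_llm), subquestions))
    # Only the summaries, not the subagents' full contexts, reach the lead agent.
    synthesis = [{"role": "user", "content": f"Question: {question}\n\nFindings:\n" + "\n".join(findings)}]
    return call_llm(synthesis)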
If your agent’s domain is at all suited to parallelization, be sure to read the whole Anthropic write-up. It’s excellent.
Context Pruning
Context pruning is the act of removing irrelevant or otherwise unneeded information from the context.
Agents accrue context as they fire off tools and assemble documents. At times, it’s worth pausing to assess what’s been assembled and remove the cruft. This could be something you task your main LLM with, or you could design a separate LLM-powered tool to review and edit the context. Or you could choose something more tailored to the pruning task.
Context pruning has a (relatively) long history, as context lengths were a more problematic bottleneck in the natural language processing (NLP) field prior to ChatGPT. Building on this history, a current pruning method is Provence, “an efficient and robust context pruner for question answering.”
Provence is fast, accurate, simple to use, and relatively small—just 1.75 GB. You can call it in a few lines, like so:
from transformers import AutoModel

provence = AutoModel.from_pretrained("naver/provence-reranker-debertav3-v1", trust_remote_code=True)

# Read in a markdown version of the Wikipedia entry for Alameda, CA
with open('alameda_wiki.md', 'r', encoding='utf-8') as f:
    alameda_wiki = f.read()

# Prune the article, given a question
question = 'What are my options for leaving Alameda?'
provence_output = provence.process(question, alameda_wiki)
Provence edited the article, cutting 95% of the content, leaving me with only the relevant subset. It nailed it.
One might employ Provence or a similar function to cull documents or the entire context. Further, this pattern is a strong argument for maintaining a structured5 version of your context in a dictionary or other form, from which you assemble a compiled string prior to every LLM call. This structure would come in handy when pruning, allowing you to ensure the main instructions and goals are preserved while the document or history sections can be pruned or summarized.
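Here’s a sketch of that structured approach; the section names are illustrative, but the shape is the point: sections stay individually addressable for pruning or summarization, and the flat prompt string is compiled fresh before every LLM call.

context = {
    "instructions": "You are a travel-planning agent...",
    "goals": ["Find ferry schedules", "Estimate costs"],
    "documents": [],  # retrieved or tool-fetched material; safe to prune
    "history": [],    # prior turns; safe to summarize
}

def compile_prompt(ctx):
    # Instructions and goals are always preserved; documents and history are
    # the sections a pruner or summarizer is allowed to shrink.
    return "\n\n".join([
        ctx["instructions"],
        "Goals:\n" + "\n".join(f"- {g}" for g in ctx["goals"]),
        "Documents:\n" + "\n\n".join(ctx["documents"]),
        "History:\n" + "\n".join(ctx["history"]),
    ])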
Context Summarization
Context summarization is the act of boiling down an accrued context into a condensed summary.
Context summarization first appeared as a tool for dealing with smaller context windows. As your chat session came close to exceeding the maximum context length, a summary would be generated and a new thread would begin. Chatbot users did this manually in ChatGPT or Claude, asking the bot to generate a short recap that would then be pasted into a new session.
However, as context windows increased, agent builders discovered there are benefits to summarization besides staying within the total context limit. As we’ve seen, beyond 100,000 tokens the context becomes distracting and causes the agent to rely on its accumulated history rather than its training. Summarization can help it “start over” and avoid repeating context-based actions.
Summarizing your context is easy to do, but hard to perfect for any given agent. Knowing what information should be preserved and detailing that to an LLM-powered compression step is critical for agent builders. It’s worth breaking out this function as its own LLM-powered stage or app, which lets you collect evaluation data that can inform and optimize this task directly.
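A sketch of such a stage, with call_llm standing in for your model API; the preservation rules in the prompt and the 50,000-character trigger are illustrative choices you would tune with evaluation data:

SUMMARIZE_PROMPT = """Condense the agent history below.
Preserve: stated goals, decisions made, tool outputs still needed, and open questions.
Discard: retries, dead ends, and raw tool output that has already been acted on.

History:
{history}"""

def maybe_summarize(history, call_llm, max_chars=50_000):
    joined = "\n".join(history)
    if len(joined) < max_chars:
        return history  # still small enough; leave the context alone
    summary = call_llm(SUMMARIZE_PROMPT.format(history=joined))
    # The agent effectively "starts over" from the condensed summary.
    return [f"Summary of earlier work: {summary}"]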
Context Offloading
Context offloading is the act of storing information outside the LLM’s context, usually via a tool that stores and manages the data.
This might be my favorite tactic, if only because it’s so simple you don’t believe it will work.
Again, Anthropic has a write-up of the technique, which details their “think” tool, which is basically a scratchpad:
With the “think” tool, we’re giving Claude the ability to include an additional thinking step—complete with its own designated space—as part of getting to its final answer… This is particularly helpful when performing long chains of tool calls or in long multi-step conversations with the user.
I really appreciate the research and other writing Anthropic publishes, but I’m not a fan of this tool’s name. If this tool were called scratchpad, you’d know its function immediately. It’s a place for the model to write down notes that don’t cloud its context and are available for later reference. The name “think” clashes with “extended thinking” and needlessly anthropomorphizes the model… but I digress.
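A minimal sketch of a scratchpad tool, using the common JSON function-definition shape (all names here are illustrative, not Anthropic’s implementation):

# Notes live outside the prompt; only tiny acknowledgments enter the context.
scratchpad = []

scratchpad_tool = {
    "name": "scratchpad",
    "description": "Write a note to your scratchpad, or read all notes back.",
    "parameters": {
        "type": "object",
        "properties": {
            "action": {"type": "string", "enum": ["write", "read"]},
            "note": {"type": "string", "description": "The note to record when writing."},
        },
        "required": ["action"],
    },
}

def handle_scratchpad(action, note=""):
    if action == "write":
        scratchpad.append(note)
        return "Noted."  # a brief acknowledgment is all that enters the context
    return "\n".join(scratchpad)  # notes return to the context only on demand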
Having a space to log notes and progress works. Anthropic shows that pairing the “think” tool with a domain-specific prompt (which you’d do anyway in an agent) yields significant gains: up to a 54% improvement against a benchmark for specialized agents.
Anthropic identified three scenarios where the context offloading pattern is useful:
- Tool output analysis. When Claude needs to carefully process the output of previous tool calls before acting and might need to backtrack in its approach;
- Policy-heavy environments. When Claude needs to follow detailed guidelines and verify compliance; and
- Sequential decision making. When each action builds on previous ones and mistakes are costly (often found in multi-step domains).
Takeaways
Context management is usually the hardest part of building an agent. Programming the LLM to, as Karpathy says, “pack the context windows just right,” smartly deploying tools, information, and regular context maintenance, is the job of the agent designer.
The key insight across all of the above tactics is that context is not free. Every token in the context influences the model’s behavior, for better or worse. The massive context windows of modern LLMs are a powerful capability, but they’re not an excuse to be sloppy with information management.
As you build your next agent or optimize an existing one, ask yourself: Is everything in this context earning its keep? If not, you now have six ways to fix it.
Footnotes
- Gemini 2.5 and GPT-4.1 have 1 million token context windows, large enough to throw Infinite Jest in there with plenty of room to spare.
- The “Long form text” section in the Gemini docs sums up this optimism nicely.
- In fact, in the Databricks study cited above, a frequent way models would fail when given long contexts was to return summarizations of the provided context while ignoring any instructions contained within the prompt.
- If you’re on the leaderboard, pay attention to the “Live (AST)” columns. These metrics use real-world tool definitions contributed to the project by enterprises, “avoiding the drawbacks of dataset contamination and biased benchmarks.”
- Hell, this entire list of tactics is a strong argument for why you should program your contexts.