Consider two doctors. Both are excellent: attentive, well-trained, current on the literature. The first meets you in the consultation room and asks how she can help. She is working entirely from what you tell her in the next ten minutes, with no prior history, no record, no accumulated context. The second doctor spent the morning reading your complete medical file, noting the forums you've been quietly lurking on about fatigue symptoms, observing that your sleep-quality searches doubled over the past month, and cross-referencing it all against a pattern she's recognised in other patients. She already has a working hypothesis before you've opened your mouth.

Both doctors have the same intelligence, and only one of them has context. What separates what they can offer runs deeper than method or preparation; it determines what kinds of good reasoning are even possible.

That difference is also the most underexplored limitation in how people use AI today.

We have built language models of extraordinary capability and given them, by default, the weakest possible starting position: your prompt, typed cold, composed in two minutes, entirely devoid of history. They perform brilliantly anyway, which makes it easy to forget that they are reasoning with a small fraction of the information that would make them genuinely useful. The richest context any AI could have about you has been building in your browser for years. Nobody has given it to the model; at least, not to one that works for you.

What a Language Model Actually Does

There is a persistent and understandable misconception that language models answer questions. What they actually do is complete context. A transformer-based model takes everything in its context window, all the text it can currently see, and produces the statistically most coherent continuation. The "answer" is the completion of the context formed by your question.

This matters for a specific reason: the quality of a model's output is not purely a function of its parameters but of what you put in the context window alongside your question. A model of fixed intelligence produces substantially different outputs depending on whether the context is thin (a brief, cold question) or rich (a question surrounded by relevant history, prior research, stated constraints, and worked examples). The intelligence stays constant; what fills that window is what the usefulness of the output scales with.

1M+
The context window of leading models now exceeds one million tokens, enough to hold several novels' worth of text. The technical capacity to reason over an extensive personal history already exists. What is missing is the pipeline that brings that history to the model.

Experienced prompt engineers understood this long before anyone coined the phrase. The craft lies not in asking clever questions but in constructing rich, well-ordered context. Expert users write multi-paragraph system prompts, paste in relevant documents, include examples of the output they want, stage the problem carefully before asking it. They are doing manually what a good information system would do automatically: filling the context window before the question arrives.
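That manual craft can be made concrete as a small assembly step. The sketch below is illustrative only: the `ContextItem` record and `build_prompt` helper are invented for this example, not any real system's API. The point is the ordering: framing first, context next, the question as the final clause.

```python
from dataclasses import dataclass

@dataclass
class ContextItem:
    """One piece of prior context: a document, a constraint, a worked example."""
    label: str
    text: str

def build_prompt(system: str, items: list[ContextItem], question: str) -> str:
    """Stage the problem before asking it: system framing, then relevant
    context in order, then the question arriving last."""
    parts = [system]
    for item in items:
        parts.append(f"--- {item.label} ---\n{item.text}")
    parts.append(f"Question: {question}")
    return "\n\n".join(parts)

prompt = build_prompt(
    system="You are advising on a mortgage decision. Use the research below.",
    items=[
        ContextItem("Prior research", "Compared a 5-year fix against tracker rates."),
        ContextItem("Constraint", "Likely to move within seven years; early repayment terms matter."),
    ],
    question="What mortgage deal should I take?",
)
```

The question itself never changes; everything that improves the completion happens before it in the string.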

Browsing history is that information system, already written, already ordered chronologically, already far more specific and accurate than anything you would produce by describing your interests to an AI from scratch.

The Prompt as an Incomplete Portrait

Think about how you actually research something that matters to you. You don't ask one question and act on the answer. You browse for days or weeks. You return to the same topic from different angles. You check a price, abandon a comparison, pick it back up after a conversation changes your thinking. You spend fourteen minutes on one article and four seconds on the next, which is its own kind of signal. You read three pieces making the same argument and one that doesn't, and the dissenting one is the one that shifts you.

That pattern is not noise; it is a detailed record of how you think, where your uncertainty sits, and what you have already ruled out. A language model given only your final question is like a biographer handed the last page of a diary. The question "what mortgage deal should I take?" contains almost no information on its own. Six weeks of research across fixed rates, variable options, early repayment clauses, and property valuations in two specific postcodes contains a great deal. The question is identical in both cases. What the model has been given before it is asked, however, is the thing that changes what it can offer.

Most people treat their AI like a search engine. You arrive with a question, get an answer, and leave. The history of how you arrived at the question, which is where all the real signal lives, vanishes the moment you close the tab.

There is a certain irony in where the context goes instead. The browsing record of your property research, every page loaded, every tab opened, every calculator used, is broadcast in real time to the advertising ecosystem, processed by behavioural targeting algorithms, and used to infer your purchase intent with enough accuracy to be a profitable commercial signal. The system that uses your context most aggressively is not the one trying to help you; it is the one trying to sell you something before you have consciously decided you want it.

Twenty Years of Predictive AI, Just Not Yours

The behavioural targeting industry built the world's most sophisticated applied system for context-accumulating prediction, and it has been operating at scale since the early 2000s. It knows you are in the early stages of considering a new car because your browsing pattern from three weeks ago matches the accumulated pattern of millions of other people who subsequently bought one. Rather than waiting for you to ask, it acts on a prediction derived from context you generated and never consented to have used this way.

72
The number of Facebook likes that researchers found was sufficient to predict personality traits and personal attributes (conscientiousness, openness, political affiliation, relationship status) with greater accuracy than most of a person's human friends. Browsing history is orders of magnitude richer than social engagement data.

This was never presented as AI, because the word AI was not in fashion when most of this infrastructure was built, but that is what it is: a system that takes your behavioural context as input and outputs a probabilistic prediction about your future actions, used to influence you before you have consciously made a decision. The sophistication of that system came from the depth and continuity of the context it was operating on, not from the inference engine itself.

The question that has barely been explored is what happens when you flip the orientation. The same context, the same predictive inference, except this time the beneficiary is the person who generated the data rather than the company that collected it without asking. The technical problem is largely solved. The political and architectural challenge of keeping that context under your control is the one that hasn't been.

What Changes When Context Is Continuous

Session amnesia is the defining limitation of current AI assistants. Within a conversation they are extraordinary: they track every turn, build on prior exchanges, refine their understanding as the dialogue develops. Close the tab, open a new one, and they reset entirely. You are a stranger again, and the model that spent the last hour building a working picture of your situation has no record that you ever spoke.

Continuous context changes the category of what is possible. Not just richer answers to questions you ask, but the model recognising patterns you have not consciously noticed yourself. You have been checking the same flight price for six weeks. You have been gradually moving your research toward one neighbourhood over another. You have returned to the same topic in a pattern that suggests you haven't found the right framing for the decision yet, not because you're indecisive, but because you're still missing one piece.

The distinction between proactive and predictive is worth holding clearly. A proactive AI acts without being asked, based on a trigger or a timer. A predictive AI knows what you're likely to need before you articulate it, based on a pattern recognised from accumulated context. Proactive behaviour can be designed in advance for a general population. Predictive behaviour requires a genuine model of who you are, built from accumulated evidence of how you actually operate. A calendar reminder is proactive. Knowing you are about to run out of a prescription before you do is predictive, and it requires continuous observation rather than a rule.

Continuous context is what makes the second thing possible: not simply richer answers to questions you type, but a model that already understands what you are working through, which decisions are unresolved, which information gaps you keep returning to fill. The prompt shifts from being the starting point of the interaction to being the final clause of a much longer context that was already there.

The Architecture of a Better Prompt

The practical infrastructure for this is less exotic than it sounds. Browsing data describes your interests, your research state, your purchase intent, your attention patterns. It requires not surveillance-grade access to new information but a system that collects what your browser already knows, structures it coherently, and makes it available as context before the question is asked.
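A minimal version of that structuring step might look like the following. This is a sketch under stated assumptions, not Prism's implementation: the `Visit` record and `context_layer` function are hypothetical, and real pipelines would add relevance filtering and encryption. It shows the shape of the idea: order what the browser already knows chronologically, keep what actually held attention, and emit it as a context block.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Visit:
    """A minimal browsing-history record, of the kind a browser already stores."""
    when: datetime
    title: str
    dwell_seconds: int  # attention is its own signal: 14 minutes vs 4 seconds

def context_layer(visits: list[Visit], min_dwell: int = 60) -> str:
    """Structure existing history into a context block: chronological,
    filtered to pages the user actually spent time on."""
    kept = sorted((v for v in visits if v.dwell_seconds >= min_dwell),
                  key=lambda v: v.when)
    lines = [f"{v.when:%Y-%m-%d}: {v.title} ({v.dwell_seconds}s)" for v in kept]
    return "Recent research history:\n" + "\n".join(lines)

visits = [
    Visit(datetime(2024, 5, 2), "Fixed vs variable mortgage rates", 840),
    Visit(datetime(2024, 5, 2), "Celebrity news roundup", 4),  # dropped: no dwell
    Visit(datetime(2024, 4, 20), "Early repayment clauses explained", 610),
]
block = context_layer(visits)
```

The output of a function like this is what would be prepended to the prompt, so the question arrives as the last line of a file the model has already read.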

This is part of what Prism is building. The browsing data that currently feeds advertising algorithms is captured locally first, encrypted under your control, and made available as a context layer for AI inference. Instead of explaining your situation to a model from scratch, the model has already read three months of relevant behaviour. The question you ask is the final page of a file it has been through. The improvement in what the model can offer is not a matter of surface polish.

The privacy architecture is not incidental to this; it is the condition that makes the whole project worth doing. A context-fed AI that reports to a third party is the existing advertising system with a smarter engine. The only version of this worth building is one where the context accumulates in your wallet, reasoned over by a model working in your interest, without the data leaving your control.

The Next Phase of Prompt Engineering

Prompt engineering had a productive run. The craft of constructing precise, well-staged questions to extract useful responses from language models became a genuine specialisation, with real technique behind it. It was the right response to a real limitation: models without context need better questions.

The next development is not better questions but better context. A model situated in your actual browsing history, populated with your real research patterns, embedded in your life rather than a blank conversation window, does not need you to be a skilled prompter. It needs access to what you have already been doing. The skill of asking well becomes less important as the supply of relevant context improves.

That shift, from prompt engineering to context engineering, is probably the most significant change in how AI becomes useful at the personal level over the next few years. Most of the hard technical problems are already solved, and the remaining challenge is architectural: finding a way for the context to accumulate under your control rather than someone else's. That question has proved easy to state and inconvenient to answer, which is precisely why it remains the most interesting one to work on.