Imagine a research assistant who is extraordinarily talented. Sharp, thorough, capable of reasoning across fields, patient with ambiguous questions, honest about what she doesn't know. You rely on her constantly. But she has one condition: every time you open the door to her office, she has no memory of you. Not the project you've been working on for six months. Not the thirty conversations that shaped your understanding of the problem. Not even your name. She is brilliant, and she meets you every single time as a stranger.

This is, more or less, the current state of AI assistants. Within a conversation they accumulate context with impressive fidelity, building on each exchange, refining their model of what you need as the dialogue develops. Then you close the tab. The next morning, the office door opens, and she introduces herself again.

The difference between what AI assistants can do within a conversation and what they could do with continuous context across many conversations runs deeper than a technical limitation; it describes a different kind of tool entirely.

The Reset Button No One Asked For

Session amnesia is an architectural choice, not an intrinsic property of language models. Models forget between conversations not because the capability is absent but because they were never designed with persistent memory in mind, and because the technical and commercial challenges of building it properly have, so far, outweighed the incentive to solve them. Those challenges are genuine: questions about what to store, how long to store it, and who owns it do not resolve themselves easily. The path of least resistance is to store nothing and ask the user to re-establish context every time they need the model to understand them.

The cost of that decision is mostly invisible to the people making it, because those people are not the ones who spend twenty minutes reconstructing their situation at the start of every session. Users absorb that friction as a normal feature of the tool. You repaste the document. You re-explain the project constraints. You re-establish that you already tried the obvious solution and it didn't work. The assistant is excellent once it understands the problem. Re-establishing that understanding is your job, every time.

A language model tracks everything within a conversation and nothing across conversations. That boundary was drawn by product decisions, not by the limits of the technology.

What this means in practice is that the longer and more complex a problem, the more the user pays. A simple factual query costs nothing to re-ask. A six-month research process costs enormously to re-describe. The users who would benefit most from an AI that knows them, people working through long, evolving problems with many moving parts, are the ones most penalized by the reset.

What Behavioural Prediction Actually Looks Like

There is an instructive comparison to be made with the advertising industry, not because advertising is a model to emulate but because it demonstrates what is technically possible when the incentive to persist behavioral context is strong enough.

The ad-tech infrastructure that developed through the 2000s and 2010s was, in its essentials, a system for accumulating and reasoning over behavioral context at scale. Not context you consciously provided, but context you generated passively through your browsing. Every page request, every search query, every return visit to a product page, every second spent on a piece of content that you chose not to share or like but that you clearly read closely: all of it fed a model whose job was to produce one output: the most accurate prediction of what you might want next, delivered before you asked for it.

5,000+
Estimated data points collected per person by major ad brokers, compiled across browsing history, location data, purchase records, and inferred attributes. The behavioral record that advertising built on you is, by any measure, far richer than the context you have ever given an AI working in your interest.

This was predictive AI operating for two decades before the phrase entered common use. Rather than waiting for you to ask a question, it inferred what you were working toward from the pattern of your behavior and acted on that inference, usually by showing you an advertisement, but increasingly by surfacing content, ordering search results, and shaping what you encountered before you had consciously articulated what you were looking for.

The sophistication of that system was never primarily about the model. It was about the depth and continuity of the context it was operating on. A modest inference engine with continuous behavioral data substantially outperforms a sophisticated inference engine working from a cold start. That lesson was learned in advertising, and it has not yet been applied to AI that serves the interests of users rather than advertisers.

Proactive vs Predictive: A Meaningful Distinction

One distinction becomes important here: the difference between proactive and predictive behavior in AI systems. They sound similar and are often conflated, but they describe very different capabilities with very different requirements.

A proactive AI acts without being explicitly asked, based on a trigger or a predefined rule. Your calendar app sends you a reminder before a meeting. A smart home system turns on the lights at dusk. Your AI assistant sends you a daily briefing at seven in the morning. These are all genuinely useful. None of them require the system to know anything specific about you beyond a rule that was configured in advance. They are proactive in the sense that they act without a direct instruction, but they are scripted: the behavior was specified ahead of time by someone who had a general use case in mind, not a particular person.

A predictive AI works differently. It recognizes patterns in your specific behavior that you have not consciously noticed or articulated, and surfaces something useful based on that recognition, before you know you need it. It notices you have been checking the same flight route every few days for six weeks and flags a price drop. It observes that your research has been circling the same unresolved question for a month and surfaces the framing that might close it. It registers that your reading pattern shifted two weeks ago in a way that historically precedes a particular kind of decision for you specifically. None of this is scheduled or rule-based. It requires actually knowing you, over time.

The gap between these two things is the gap between a good alarm clock and the doctor who read the file before you walked in. A proactive AI is a well-configured feature; a predictive one is something closer to a relationship, and it requires continuous context in a way that scripted behavior simply does not.
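The contrast can be made concrete in a few lines. This is a sketch only: the visit records, field layout, and thresholds below are invented for illustration, not drawn from any real product. The proactive function is a fixed rule configured in advance; the predictive one has nothing to say until it has read a specific user's accumulated history.

```python
from collections import defaultdict
from datetime import date

# Hypothetical browsing records: (day, url, observed_price).
# The schema and the price signal are illustrative assumptions.
visits = [
    (date(2024, 3, 1),  "flights/LHR-SFO", 612),
    (date(2024, 3, 5),  "flights/LHR-SFO", 598),
    (date(2024, 3, 12), "flights/LHR-SFO", 605),
    (date(2024, 3, 19), "flights/LHR-SFO", 610),
    (date(2024, 3, 26), "flights/LHR-SFO", 601),
    (date(2024, 4, 2),  "flights/LHR-SFO", 544),
    (date(2024, 3, 2),  "news/markets", None),
]

def proactive_briefing(today):
    """Proactive: a fixed rule, configured in advance; knows nothing about you."""
    return f"Daily briefing for {today}"

def predictive_flags(visits, min_checks=4, min_span_days=21, drop=0.05):
    """Predictive: notice a sustained recurring pattern in this user's
    history and surface a change before it is asked about."""
    by_url = defaultdict(list)
    for day, url, price in visits:
        if price is not None:
            by_url[url].append((day, price))
    flags = []
    for url, points in by_url.items():
        points.sort()
        days = [d for d, _ in points]
        prices = [p for _, p in points]
        sustained = (len(points) >= min_checks
                     and (days[-1] - days[0]).days >= min_span_days)
        baseline = sum(prices[:-1]) / len(prices[:-1]) if len(prices) > 1 else None
        if sustained and baseline and prices[-1] <= baseline * (1 - drop):
            flags.append((url, baseline, prices[-1]))
    return flags

print(predictive_flags(visits))
```

The rule-based function would behave identically for every user on day one; the pattern-based one only becomes useful after weeks of continuous context, which is exactly the dependency the section describes.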

The Information That Already Exists

The raw material for predictive AI at the personal level is not exotic or expensive to generate; it is already being generated, continuously, by everything you do online. Browsing history is the richest behavioral record a person creates in their daily life. It captures not only what you were interested in but the structure of how you arrived at that interest: the searches that preceded the pages, the pages that led to other pages, the topics you returned to and the ones you left behind. It records duration, which is a proxy for engagement that social-media likes entirely fail to capture. It preserves the trajectory of decisions in a way that almost no other data source does.
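A sketch of what such a record might contain: the field names and helper below are assumptions for illustration, not any browser's actual history schema. The point is that referrer chains preserve the trajectory, not just the destination.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative record structure; not a real browser-history schema.
@dataclass
class Visit:
    url: str
    referrer: Optional[str]   # how you arrived: the page or search before this one
    query: Optional[str]      # the search that preceded the page, if any
    dwell_seconds: int        # duration: a proxy for engagement

def trajectory(visits, url):
    """Walk referrers backwards to recover how an interest was arrived at."""
    by_url = {v.url: v for v in visits}
    path, cur = [], by_url.get(url)
    while cur:
        path.append(cur.url)
        cur = by_url.get(cur.referrer)
    return list(reversed(path))

# A hypothetical research session, reconstructed end to start.
history = [
    Visit("search?q=career+change+finance", None, "career change finance", 40),
    Visit("blog/switching-industries", "search?q=career+change+finance", None, 310),
    Visit("calc/runway-calculator", "blog/switching-industries", None, 520),
]
print(trajectory(history, "calc/runway-calculator"))
```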

The problem is not availability but who is using it. The behavioral record that advertising platforms have built on you over two decades was generated by you and captures something genuine about your interests, your anxieties, your intentions. It has been used, relentlessly and efficiently, to influence your behavior in directions that are commercially convenient for people who are not you. The same data, under different architecture, is the foundation of an AI that could genuinely anticipate what you need and help you before you think to ask.

87%
Proportion of Americans who say they would like more control over how their personal data is collected and used, per Pew Research. The appetite for a different architecture is there. The infrastructure that delivers it has been slow to follow.

What is required is not new technology but a reorientation of who the beneficiary is. The behavioral prediction machinery exists. The question is whether it can be rebuilt so that the predictions serve the person who generated the data, rather than the company that collected it without asking.

What a Context-Aware AI Would Actually Do

The examples that illustrate this best are drawn from experiences people have already had, imperfectly and sporadically, with AI assistants on the occasions when they happened to re-establish enough context to make the assistant useful.

Consider someone who has spent two months researching a career change. They have read extensively, compared industries, thought through financial implications, had the same conversational loops with friends who don't quite understand the specifics. An AI that has access to that research history does not need the problem re-explained. It already knows which variables are settled and which are still in motion. It recognizes that the unresolved question is not about the new role itself but about a specific financial threshold that keeps appearing in the research. It can surface the relevant information before the question is typed, because it has been watching the pattern that leads to that question for weeks.

Or consider someone who has been tracking a slow-moving health concern, reading across symptoms, trying to build an accurate picture before a medical appointment. An AI that has followed that research can see the gaps in it. It notices the sources consulted were consistently from one school of thought and can surface the counterargument. It can flag the specific question that the research keeps approaching but never directly addresses. The distinction from diagnosis matters here. What is described is awareness of the shape of someone's research process, not of the medical question at the centre of it, and that awareness makes the process far more efficient than any individual session could achieve.

None of this is speculative. These interactions already happen, partially and accidentally, whenever a user manages to re-establish enough context that the model has something real to work with. What limits them is not the model's capability but the absence of any system that accumulates and carries that context forward automatically.

The Architecture That Makes It Worth Having

There is a version of context-continuous AI that would be straightforwardly terrible: a system where your behavioral history accumulates in a server you do not control, is used to infer your intentions with high accuracy, and is operated by a company with interests that do not align with yours. That description fits the current advertising infrastructure almost exactly, and it is not a model to replicate.

The version worth building differs in one architectural respect: the context accumulates under your control, encrypted with keys you hold, accessible to AI inference on your behalf without the raw data leaving your custody. The predictive capability is derived from your history, and the predictions serve you rather than being sold about you.

This is what a small number of tools are beginning to work toward, and it is genuinely hard to get right. Prism is one of them. The premise is that the behavioral context your browser generates each day is captured locally, stored in an encrypted wallet that only you can access, and used as the input for AI inferences that surface patterns and insights on your behalf. Not to target you, not to build a commercial profile, but to give you the thing that advertising infrastructure accidentally proved was possible: an AI that knows what you are working on and can anticipate what you need.
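To make the custody boundary concrete, here is a minimal sketch under stated assumptions: the class and key handling below are illustrative, not Prism's actual implementation, and the HMAC integrity tags stand in for real authenticated encryption (e.g. AES-GCM from a vetted library). What the sketch shows is the shape of the flow: records and keys stay local, and only a derived insight crosses the boundary.

```python
import hashlib
import hmac
import json

class LocalContextStore:
    """Illustrative sketch only. Names and key handling are assumptions;
    HMAC tags here stand in for real authenticated encryption."""

    def __init__(self, passphrase: str, salt: bytes = b"demo-salt"):
        self._salt = salt
        # Key derived from a secret only the user holds.
        self._key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
        self._records = []  # stays on the user's machine

    def append(self, record: dict) -> None:
        payload = json.dumps(record, sort_keys=True).encode()
        tag = hmac.new(self._key, payload, "sha256").hexdigest()
        self._records.append((payload, tag))

    def insight(self, passphrase: str) -> dict:
        # Inference runs where the data lives; only the summary leaves.
        key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), self._salt, 100_000)
        if not hmac.compare_digest(key, self._key):
            raise PermissionError("wrong key: raw context stays sealed")
        counts = {}
        for payload, tag in self._records:
            # Verify each record is untampered before using it.
            assert hmac.new(self._key, payload, "sha256").hexdigest() == tag
            topic = json.loads(payload)["topic"]
            counts[topic] = counts.get(topic, 0) + 1
        return {"most_revisited_topic": max(counts, key=counts.get)}

store = LocalContextStore("correct horse battery staple")
for topic in ["flights", "flights", "visa-rules", "flights"]:
    store.append({"topic": topic})
print(store.insight("correct horse battery staple"))
```

The design choice the sketch isolates is that the inference output is a small derived summary, while the raw records and the key never leave the store; a caller without the user's passphrase gets nothing at all.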

The amnesiac assistant is a consequence of building AI without solving the context problem, not an inevitable property of the technology. Solving it requires building something that the current commercial architecture of the web was specifically not designed to provide: a personal behavioral record that belongs to you. That is the harder problem, and for that reason it is the one worth working on.