HACKER Q&A
📣 marcospassos

Building LLM apps? How are you handling user context?


I've been building stuff with LLMs, and every time I need user context, I end up manually wiring up a context pipeline.

Sure, the model can reason and answer questions well, but it has zero idea who the user is, where they came from, or what they've been doing in the app. Without that, I either have to make the model ask awkward initial questions to figure it out or let it guess, which is usually wrong.

So I keep rebuilding the same setup: tracking events, enriching sessions, summarizing behavior, and injecting that into prompts.

It makes the app way more helpful, but it's a pain.

What I wish existed is a simple way to grab a session summary or user context I could just drop into a prompt. Something like:

const context = await getContext();

const response = await generateText({
  model, // generateText also wants a model instance
  system: `Here's the user context: ${context}`,
  messages: [...],
});

Some examples of how I use this:

- For support, I pass in the docs they viewed or the error page they landed on.

- For marketing, I summarize their journey, like 'ad clicked' → 'blog post read' → 'pricing page'.

- For sales, I highlight behavior that suggests whether they're a startup or an enterprise.

- For product, I classify the session as 'confused', 'exploring plans', or 'ready to buy'.

- For recommendations, I generate embeddings from recent activity and use that to match content or products more accurately.

In all of these cases, I usually inject things like recent activity, timezone, currency, traffic source, and any signals I can gather that help guide the experience.
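
For concreteness, the glue code I keep rewriting looks roughly like this (getRecentEvents/getSessionMeta are placeholders for whatever event store and session data you already have; the shape matters more than the names):

  type UserEvent = { name: string; url?: string; at: number };

  // Placeholders: however you already record pageviews, clicks, and errors.
  declare function getRecentEvents(userId: string): Promise<UserEvent[]>;
  declare function getSessionMeta(userId: string): Promise<{ timezone: string; currency: string; source: string }>;

  async function buildUserContext(userId: string): Promise<string> {
    const [events, meta] = await Promise.all([getRecentEvents(userId), getSessionMeta(userId)]);
    const activity = events
      .slice(-20) // keep the prompt small: only the last 20 events
      .map(e => `- ${new Date(e.at).toISOString()} ${e.name}${e.url ? ` (${e.url})` : ''}`)
      .join('\n');
    return `Timezone: ${meta.timezone}\nCurrency: ${meta.currency}\nTraffic source: ${meta.source}\nRecent activity:\n${activity}`;
  }

The result just gets dropped into the system prompt like in the snippet above.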

Has anyone else run into this same issue? Found a better way?

I'm considering building something around this, initially just to solve my own problem. I'd love to hear how others are handling it or whether this sounds useful to you.


  👤 matt_s Accepted Answer ✓
Interacting with LLMs or AI APIs follows the same patterns as other software. It doesn't really matter that it's AI or an LLM: you are calling a function, providing inputs, and expecting output. You get better output when your inputs are tuned to the scenario. Some of your inputs in this paradigm could be considered optional parameters, because you still get output without them.

If you need to remember parts of the inputs between user sessions, then you need to save that state to disk somewhere. Databases are a common choice, especially in web development, but you could also just put things in a file. Another option, if this isn't a web development context, is to use something like SQLite, since it will organize the data a little better than, say, CSVs or similar.
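
For example, a minimal sketch with SQLite (using the better-sqlite3 package here, but any driver or even a flat file works the same way):

  import Database from 'better-sqlite3';

  // One table of raw events is enough to start; summarize at prompt time.
  const db = new Database('user_context.db');
  db.exec(`CREATE TABLE IF NOT EXISTS events (
    user_id TEXT NOT NULL,
    name    TEXT NOT NULL,
    data    TEXT,
    at      INTEGER NOT NULL
  )`);

  export function recordEvent(userId: string, name: string, data?: unknown) {
    db.prepare('INSERT INTO events (user_id, name, data, at) VALUES (?, ?, ?, ?)')
      .run(userId, name, JSON.stringify(data ?? null), Date.now());
  }

  export function recentEvents(userId: string, limit = 20) {
    return db.prepare('SELECT name, data, at FROM events WHERE user_id = ? ORDER BY at DESC LIMIT ?')
      .all(userId, limit);
  }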


👤 coolKid721
Proper usage of LLMs, so you don't just flood them with useless context, comes down to custom-tailored prompts that only include the pertinent context, with the prompts saying how it relates to what you're looking for. I don't think there's a cheap way around it; on the plus side, maybe you can tune them using AI-written code. I think tools are really overused and overrated, and I've had horrible experiences with them; nothing beats custom tailoring stuff and setting up a system around it.

What I do is use Elixir Phoenix: a GenServer keeps track of the user state, I include the related state in the request, and helper functions generate the related prompts per type of state/context and append them wherever makes the most sense.

I think LLMs make the most sense viewed as singular atomic interactions where you have the whole input (prompt/context/data) and get a concrete output. Everything else just seems like people being lazy and trying to avoid thinking about the best way of structuring it. Where you put the context/data and how you include it will vary per prompt or per specific atomic interaction; there is no standard rule, each interaction is unique. You have to experiment and see what provides the best output for each kind of request. I'd read Anthropic's prompting docs if you haven't; they're very good. https://docs.anthropic.com/en/docs/build-with-claude/prompt-...

My way of thinking is to view every isolated LLM request as a unique function: prompt + LLM = a unique function. Context is just the data you pass into that function, (prompt + llm + settings (temp, etc))(data), to get whatever specific output you want. The prompt includes prewritten user/system messages, the system prompt, structured output stuff, or whatever. Any single request might lead to 1 or 30 of these that feed back into each other. But yeah, based on that, it comes down to custom tailoring them for everything. It's pretty conceptual and intellectual, but I find it fun, and I don't think there's any easy way around it. Having the ability to make all your requests stateful and modify what goes into the prompt based on the current user state (like GenServers/Elixir make very easy) is a nice technical thing that helps, though.
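
In TypeScript terms (the OP's snippet is JS), the "one prompt builder per atomic interaction" idea is roughly this; the state shape and field names are just illustrative:

  type UserState = {
    plan: string | null;
    recentPages: string[];
    lastError?: string;
  };

  // One builder per interaction, each pulling only the state slice it needs.
  function supportPrompt(state: UserState, question: string) {
    return {
      system:
        `You are support. The user recently viewed: ${state.recentPages.slice(-5).join(', ')}.` +
        (state.lastError ? ` Their last error was: ${state.lastError}.` : ''),
      messages: [{ role: 'user' as const, content: question }],
    };
  }

  function salesPrompt(state: UserState) {
    return {
      system: `Decide whether this visitor looks like a startup or an enterprise, based only on: plan=${state.plan}, pages=${state.recentPages.join(', ')}.`,
      messages: [{ role: 'user' as const, content: 'Classify the visitor.' }],
    };
  }

Each of these gets spread straight into the model call, so every request stays a pure function of (prompt, state).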


👤 ProfessorZoom
I embed tons of separate pieces of information and save the vectors in a db. Then I embed the user's question and have a stored procedure in the db calculate the top 10 (or 20 or 50, depending on the model) most similar pieces of information.

I have an editor where I can ask a question and it brings up the most related pieces of info, and if I change any of those pieces, it updates the embedding in the db.
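
If the db happens to be Postgres with pgvector, that lookup is roughly the following (table name, column, and embedding model are assumptions; swap in whatever you use):

  import OpenAI from 'openai';
  import { Pool } from 'pg';

  const openai = new OpenAI(); // OPENAI_API_KEY from env
  const pool = new Pool();     // PG connection settings from env

  // Assumes a table like: chunks(id serial, content text, embedding vector(1536))
  export async function topMatches(question: string, k = 10): Promise<string[]> {
    const res = await openai.embeddings.create({
      model: 'text-embedding-3-small',
      input: question,
    });
    const vec = JSON.stringify(res.data[0].embedding); // pgvector accepts the '[...]' text form
    const { rows } = await pool.query(
      'SELECT content FROM chunks ORDER BY embedding <=> $1::vector LIMIT $2',
      [vec, k],
    );
    return rows.map((r) => r.content);
  }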


👤 esafak
I think MCP is the right place to declare the context management API; the C in MCP is Context. As far as building goes, you could build a (universal) context store. I guess the value would be to bring the context closer to the model?
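
If someone builds that, a context server with the MCP TypeScript SDK could look roughly like this (the tool name and buildUserContext are placeholders; check the SDK docs for the current API surface):

  import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
  import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
  import { z } from 'zod';

  // Placeholder: however you assemble the user's context string.
  declare function buildUserContext(userId: string): Promise<string>;

  const server = new McpServer({ name: 'user-context', version: '0.1.0' });

  // Expose user context as a tool any MCP-capable client/agent can call.
  server.tool('get_user_context', { userId: z.string() }, async ({ userId }) => ({
    content: [{ type: 'text' as const, text: await buildUserContext(userId) }],
  }));

  await server.connect(new StdioServerTransport());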

👤 max_on_hn
I don't know of anything off-the-shelf, but you could query analytics tools at runtime (e.g. Mixpanel, PostHog) to gather the raw data, and use a generic summarizer to turn that into behavioral context that's usable downstream.
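
As a heavily hedged sketch (the PostHog endpoint and params below are illustrative, check their API docs; Mixpanel would look similar):

  import { generateText } from 'ai';
  import { openai } from '@ai-sdk/openai';

  // Illustrative only: verify the exact events endpoint/params against PostHog's docs.
  export async function behavioralContext(distinctId: string): Promise<string> {
    const res = await fetch(
      `https://app.posthog.com/api/projects/${process.env.POSTHOG_PROJECT_ID}/events/?distinct_id=${distinctId}`,
      { headers: { Authorization: `Bearer ${process.env.POSTHOG_API_KEY}` } },
    );
    const events = await res.json();
    const { text } = await generateText({
      model: openai('gpt-4o-mini'),
      system: "Summarize this user's recent product behavior in 3-5 bullet points for use as prompt context.",
      prompt: JSON.stringify(events),
    });
    return text;
  }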

👤 bilater
You might find this useful: https://context7.com/

👤 nico
I haven’t solved this, but sounds super useful!

Would love to have something like a Hotjar/analytics script that could automatically collect context, which I could then query to produce context for a prompt.

Great idea, you should build it. Then do a Show HN with it


👤 barbazoo
MCP maybe? You could provide tools for the LLM to discover that data at runtime.
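
A rough sketch of that with the Vercel AI SDK's tool calling (details vary by SDK version; getRecentEvents is whatever lookup you already have):

  import { generateText, tool } from 'ai';
  import { openai } from '@ai-sdk/openai';
  import { z } from 'zod';

  // Placeholder for your existing event lookup.
  declare function getRecentEvents(userId: string): Promise<unknown[]>;

  const { text } = await generateText({
    model: openai('gpt-4o'),
    tools: {
      getUserActivity: tool({
        description: "Fetch the current user's recent in-app activity",
        parameters: z.object({ userId: z.string() }),
        execute: async ({ userId }) => getRecentEvents(userId),
      }),
    },
    maxSteps: 3, // allow a tool call plus a final answer
    system: 'Call getUserActivity before answering questions about the user.',
    messages: [{ role: 'user', content: 'Why might user 42 be stuck on the pricing page?' }],
  });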