Liquifying Your AI Conversations
A Tool for LLM Data Portability
Large language models have chronic memory loss. DeepMind's Philipp Schmid describes it like this:
> Imagine hiring a brilliant co-worker. They can reason, write, and research with incredible skill. But there’s a catch: every day, they forget everything they ever did, learned or said. This is the reality of most Agents today. They are powerful but are inherently stateless.
The stateless nature of LLMs is a problem because memory is a critical component of making these tools useful. There are plenty of things that you've said to an LLM in the past that are relevant over and over again—your preferred programming language, your dietary restrictions, your taste in music. Without memory, you end up repeating yourself every time a relevant conversation comes up.
LLM providers know this, and they're leveraging memory to make their products stickier. ChatGPT has two memory implementations: a high-level memory store of text snippets the LLM records about you, and a more fine-grained search over all your past conversations. Gemini lets you save information about yourself for the model to reference, and it can recall past conversations when asked. Anthropic recently rolled out memory for Claude.
This kind of history-aware personalization is both incredibly useful (at least in theory) and a problem for consumers. Memory is useful and sticky: if ChatGPT is the only LLM with detailed knowledge of your interactions, personality, and preferences, then you're locked into that app. Weighing providers is no longer as simple as evaluating who has the best model, because jumping ship means starting from square one with an LLM that knows nothing about you. That's a bad trade at a moment when the frontier is so contested and providers have such different tools, areas of focus, strengths, and weaknesses; we should be able to move freely between models.
How do we separate user memory from ChatGPT et al.? There's a lot of work to do around a truly open, user-controlled memory store for LLM providers. But a first step is freeing context that already exists from your conversations. That's where Liquify comes in. This is a small tool I've built that lets you 1) extract your conversation history from the major LLM providers into a standardized format, 2) explore, search and filter those conversations across all providers, and 3) export that data to use however you want, with any tool you want. There's a live version here; you can also run this all locally. In either case, there is no data persistence—everything is stored and processed in your browser—and no telemetry.
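To make the "standardized format" idea concrete, here's a minimal sketch of what a provider-neutral thread could look like. The `Thread` and `Message` shapes below are hypothetical illustrations, not Liquify's actual schema:

```typescript
// Hypothetical provider-neutral shapes for a normalized conversation.
// These names (Thread, Message) are illustrative, not Liquify's real schema.
interface Message {
  role: "user" | "assistant";
  text: string;
  timestamp?: string; // ISO 8601, when the provider export includes one
}

interface Thread {
  provider: "openai" | "anthropic" | "google" | "xai";
  title: string;
  createdAt: string;
  messages: Message[];
}

// Once normalized, a thread looks the same regardless of where it came from:
const example: Thread = {
  provider: "anthropic",
  title: "Debugging a Svelte store",
  createdAt: "2024-11-02T14:03:00Z",
  messages: [
    { role: "user", text: "Why isn't my derived store updating?" },
    { role: "assistant", text: "A few common causes..." },
  ],
};
```

The point of a shape like this is that search, filtering, and export only ever have to deal with one structure, no matter which provider produced the data.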
What can you do with Liquify? Let's walk through a couple examples.
I bounce regularly between Claude and ChatGPT. I want an overview of useful cues in my coding preferences and styles—libraries, design patterns, bug types—that might help give me more consistent outputs. Liquify lets me search for relevant coding threads and filter to recent examples, which I can export into markdown. From those conversations, I could use an LLM to distill down a compact representation of my coding idiosyncrasies, which I could then use as a system prompt in something like a Claude Project:
**General Coding Preferences**

- **Language focus:** Default to Python. Use SQL if working with databases. Only show other languages (JavaScript, Java, etc.) when explicitly asked or for comparison.
- **Clarity first:** Provide complete, runnable examples with explanatory comments. Avoid partial code unless Nick asks for minimal snippets.
- **Error handling:** Include basic guardrails (e.g., check for empty lists before division, handle missing files gracefully).
- **Keep it light:** Prefer thin glue code—let frameworks or LLMs do heavy lifting where possible.
- ...
For non-coding activities, I might want to take a PKM approach to sorting my thoughts, or resurfacing interesting insights. So again, I could filter down to everything that I did last month and export those threads into markdown. Then I could feed that data to an LLM and get a refresher on what I was thinking about and what I might want to follow up on:
## 2) Practical systems/DevOps snippets you keep re-needing
- **Pattern:** Repeatable “how do I…?” tasks in Git and PR hygiene (split monorepo dirs, diff → Markdown, file activity stats).
- **Why it matters:** Turn these into one-command tools; they’re classic “sharp knives” for your future self.
- **Resurface ideas (quick wins):**
- **Monorepo directory split** templates (`git filter-repo`, `git subtree split`) saved as bash scripts in `~/bin`.
- **PR → Markdown diff** one-liner to paste into docs or chats (you already outlined the approach).
- **Deeper dive:** A tiny **“git aide”** repo with: split, sparse-checkout, PR->MD, top-files-touched reports—add Make targets and usage examples.
These are just a couple of examples, but the goal is to let people do whatever they want with their conversation histories, in whatever format they choose. LLM tools are useful; they're also, in many cases, repositories of incredibly personal information, and that information should be accessible to the people it belongs to. Every major LLM provider lets you export your data, and that's a start. Liquify is an effort to go further, making that data accessible and observable for more people.
From a technical perspective, this is a very straightforward Svelte app. There’s a library with a thread viewer, an exporter, and some logic to ingest and standardize the data exports each LLM platform provides. Everything runs in the browser, which makes it simple to use locally or online without any server-side processing. You can review the code in the repository.
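The ingestion step can be sketched as an adapter per provider, each mapping that provider's raw export onto a common thread type. This is an assumption-laden illustration, not Liquify's actual code; the raw input shape below is a simplified stand-in for an Anthropic-style export, not the real schema:

```typescript
// Illustrative adapter pattern: one normalizer per provider, all emitting
// the same common shape. The raw type is a simplified stand-in, not the
// real export schema.
type Role = "user" | "assistant";

interface CommonMessage { role: Role; text: string; }
interface CommonThread { provider: string; title: string; messages: CommonMessage[]; }

// Simplified stand-in for one conversation in an Anthropic-style export.
interface RawAnthropicThread {
  name: string;
  chat_messages: { sender: "human" | "assistant"; text: string }[];
}

function normalizeAnthropic(raw: RawAnthropicThread): CommonThread {
  return {
    provider: "anthropic",
    title: raw.name,
    // Map the provider's role vocabulary onto the common one.
    messages: raw.chat_messages.map((m) => ({
      role: m.sender === "human" ? "user" : "assistant",
      text: m.text,
    })),
  };
}
```

With one such function per provider, the rest of the app (viewer, search, exporter) never has to know which platform a thread came from.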
There are some limitations to this implementation. Right now, it's text only: data exports can contain a range of other media depending on your activity (audio, images, video, etc.), but the tool doesn't handle those. There are also limitations specific to Gemini, because of how Google structures its exports. Unlike other providers, Google doesn't give you whole threads, just prompt-response pairs. So while everything is included, you lose the shape of your conversations as they played out. Finally, the only supported providers are OpenAI, Anthropic, Google, and xAI (though contributions are welcome!).
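One hedged sketch of how such pairs could still be folded into a thread-oriented format: treat each prompt-response pair as a minimal two-message thread. This is a hypothetical approach, not necessarily how the tool handles Gemini data:

```typescript
// Hypothetical: wrap one Gemini prompt-response pair as a two-message thread.
// The Pair shape is an assumption, not Google's actual export schema.
interface Pair { prompt: string; response: string; }

interface MiniThread {
  provider: "google";
  messages: { role: "user" | "assistant"; text: string }[];
}

function pairToThread(pair: Pair): MiniThread {
  return {
    provider: "google",
    messages: [
      { role: "user", text: pair.prompt },
      { role: "assistant", text: pair.response },
    ],
  };
}
```

The data survives intact this way, but each "thread" is only one exchange long, which is exactly the limitation described above.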
This is an initial step toward full data portability with LLMs. It addresses utility: moving useful context around with you. We still need solutions that protect privacy more thoroughly, and those can only come from fully decentralized context stores that no single provider controls.