One of my favorite aspects of working with AI as my pairing partner is the dramatically reduced time to get back into a flow state. Long gone are the bad old days of reading some code, looking at my previous commit messages, reading some more code and slowly building back up that mental context every time I step out of a meeting.
Opening Windsurf and looking at the last set of pending changes ready to be approved, or rereading my last open Cascade conversation, already went a long way toward rebuilding my mental context quickly, but I decided I could make this process even better.
Windsurf has the concept of “memories” to automatically save details it finds important:
During conversation, Cascade can automatically generate and store memories if it encounters context that it believes is useful to remember.
Additionally, you can ask Cascade to create a memory at any time. Just prompt Cascade to “create a memory of …”.
Cascade’s autogenerated memories are associated with the workspace that they were created in and Cascade will retrieve them when it believes that they are relevant. Memories generated in one workspace will not be available in another.
Here are the three memories that Windsurf had created during a recent coding session.
I’m happy with the results of the refactor, but not so happy with the testing setup we came up with. I may not come back to this project for several weeks, so I need a way to remember those two details when I pick things back up; otherwise I’d waste time rediscovering my non-ideal testing setup the hard way.
I wanted to be more explicit and less reliant on Windsurf’s “magic,” and to save these details in a format better suited for my own consumption when I pick the project back up. When American Primeval started to sound more interesting than hacking away on this project, I asked Cascade to create a changelog covering everything we had done that day.
I now had an incredibly detailed overview of 20 or 30 minutes’ worth of prompting, completely removing the need to store any of that information in my own limited short- or long-term memory.
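The prompt and output I use look roughly like this. The exact wording, file name, and entries below are just an illustration of the shape I ask for, not a Windsurf convention:

```markdown
<!-- Prompt: "Create a CHANGELOG.md entry summarizing everything we did in
     this session, including decisions we made and approaches we rejected." -->

## Session notes — refactor day

### Done
- Extracted the parsing logic into its own module
- Cleaned up duplicated setup code across the test files

### Decisions
- Kept the existing config format for now; revisit if it grows

### Didn't work / rejected
- Tried an alternative testing library; rejected it for being too pedantic
```

The “Didn’t work / rejected” section is the part I care about most when returning to a project weeks later, since it’s exactly the kind of detail that evaporates from my own memory first.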
One aspect of this technique that I’ve noticed takes some nudging is documenting what didn’t work, e.g. “we tried this testing library and rejected it for being too pedantic.” I find that the AI tends to focus on the positive aspects of the session and needs to be reminded to list the areas where we wasted time on something that didn’t pan out, or where I rejected a suggestion.
In the past I’ve always been hesitant to keep too many side projects in motion. I find that using techniques like the above, both at work and at home, frees up enough extra brain power and energy for me to keep a lot more plates spinning at a time. I’m noticing I’m now just as likely to code my own solution to a small problem I’m facing as I am to reach for an existing open-source solution or a commercial tool.