Public tidbits and discoveries that aren't worth a full blog post but that I still wanted to write about. No guarantees of accuracy, utility, sanity, or anything else are provided. Also cross-posted to nik.tw.
All Notes
Advanced Logseq queries can be useful
Logseq has a nifty feature: if you use its task primitives, it automatically runs a query under the hood to show you the tasks you’re currently working on (i.e. blocks that begin with NOW). I vaguely knew this was implemented as a Logseq advanced query, but I hadn’t previously looked into how advanced queries actually work, so today I decided to rectify that.
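For reference, the built-in NOW view boils down to something shaped like the advanced query below. This is a sketch of the general form (title and exact clauses are my own, not Logseq's internal query verbatim), but `:block/marker` is the attribute Logseq uses for task keywords:

```clojure
#+BEGIN_QUERY
{:title "Now"
 :query [:find (pull ?b [*])
         :where [?b :block/marker "NOW"]]}
#+END_QUERY
```

Pasting a block like this into a page renders the query results inline, which is a handy way to poke at how the syntax works.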
How to use Deepseek R1 and o1 in VS Code
This is definitely still in the “experimental” phase and I’m not using it regularly yet, but if you want a Windsurf- or Cursor-like experience using o1 or Deepseek R1, here’s a quick 80/20 way to get it going.
How to watch Deepseek R1 think
tl;dr: use together.ai with Open WebUI (formerly known as the Ollama Web UI)
Deepseek R1 came out recently, and it’s the first open-source reasoning model in the same league as OpenAI’s o1. One of the coolest things about the new model is that you can see the reasoning tokens it uses for its chain-of-thought before it answers you: the model simply emits its thoughts at the beginning of the output, wrapped in <think> and </think> tags.
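Because the reasoning is just plain text in the reply, you can also pull it out of a saved transcript with standard tools. A minimal sketch (the file name and sample reply are placeholders, not real model output):

```shell
# Stand-in for a reply captured from the model:
printf '<think>\nLet me reason step by step...\n</think>\nFinal answer: 42\n' > sample_reply.txt

# Print only the reasoning section (everything between the <think> tags, inclusive):
sed -n '/<think>/,/<\/think>/p' sample_reply.txt
```

In Open WebUI you don't need any of this; the think section is rendered as a collapsible block above the answer.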
Make any webpage a standalone app (even if it’s not a PWA)
Sometimes it’s nice to be able to use webapps as first-class desktop apps rather than inside a browser tab.
This is easy enough to accomplish for websites that offer PWAs, but for those that don’t, I usually use a combination of these two tools to get a more “app-like” experience:
You can just pull GGUFs from ollama
So if there’s a new LLM that hasn’t yet been published to the main ollama hub (https://ollama.com/library) but does have a GGUF available on Hugging Face, it’s pretty easy to get it running in ollama without having to use llama.cpp directly or anything like that.
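Concretely, ollama can pull models straight from Hugging Face using an `hf.co/` model reference; the `{username}/{repository}` parts below are placeholders for the repo you want:

```shell
# Pull and run a GGUF directly from a Hugging Face repo:
ollama run hf.co/{username}/{repository}

# You can also pin a specific quantization via a tag:
ollama run hf.co/{username}/{repository}:Q4_K_M
```

This requires a running ollama daemon and only works for repos that actually contain GGUF files.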