LM Studio
LM Studio, like Ollama, is a great platform for running LLMs locally.
By default, LM Studio does not start its local server; you need to enable it explicitly (‘Local Server’ in the menu). The default port is 1234 (unlike Ollama’s 11434), so when calling Knwler:
uv run main.py -f https://knwler.com/pdfs/mbti.pdf --backend lmstudio

You can optionally specify the base URL:
uv run main.py -f https://knwler.com/pdfs/mbti.pdf --backend lmstudio --base-url http://localhost:1234/v1
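Before pointing Knwler at LM Studio, it can help to confirm the local server is actually running. A minimal sketch, using only the standard library and assuming the default port 1234; `/v1/models` is LM Studio's OpenAI-compatible model-listing endpoint:

```python
import json
import urllib.error
import urllib.request


def list_models(base_url="http://localhost:1234/v1"):
    """Query LM Studio's OpenAI-compatible /models endpoint.

    Returns a list of loaded model ids, or None if the server
    is not reachable (e.g. 'Local Server' is not enabled).
    """
    try:
        with urllib.request.urlopen(f"{base_url}/models", timeout=5) as resp:
            data = json.load(resp)
        return [m["id"] for m in data.get("data", [])]
    except (urllib.error.URLError, OSError):
        return None


if __name__ == "__main__":
    models = list_models()
    if models is None:
        print("LM Studio server not reachable; enable 'Local Server' in the app.")
    else:
        print(models)
```

If this prints `None`-style output, enable the server in LM Studio before running the `uv run main.py ... --backend lmstudio` command above.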