How to use LM Studio in BoltAI

Written By Daniel Nguyen

Last updated 3 months ago

LM Studio is a powerful, user-friendly desktop application (for Windows, macOS, and Linux) that allows users to discover, download, and run various large language models (LLMs) on their own local computer, entirely offline.

Follow these quick steps to run local models from LM Studio inside BoltAI.

Install & enable LM Studio server

  1. Open LM Studio and switch to “Power User” mode.

  2. Open the Developer tab.

  3. Toggle Enable local server on.

  4. (Optional) Note the server address if you have customized the server port.
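
Once the server is enabled, you can verify it is reachable from outside LM Studio. Here is a minimal sketch using only the Python standard library; it assumes the default address http://localhost:1234 (adjust the base URL if you changed the port in the Developer tab):

```python
import json
import urllib.request
import urllib.error

def list_local_models(base_url="http://localhost:1234"):
    """Return the model ids LM Studio's server exposes, or None if unreachable."""
    try:
        # LM Studio serves an OpenAI-compatible API; /v1/models lists loaded models.
        with urllib.request.urlopen(f"{base_url}/v1/models", timeout=3) as resp:
            data = json.load(resp)
        return [m["id"] for m in data.get("data", [])]
    except (urllib.error.URLError, OSError):
        # Connection refused / timed out: the local server is probably off.
        return None

if __name__ == "__main__":
    models = list_local_models()
    if models is None:
        print("Server not reachable. Is 'Enable local server' toggled on?")
    else:
        print("Available models:", models)
```

If this prints "Server not reachable", go back to the Developer tab before continuing with BoltAI.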

Load your favorite AI model(s)

  1. Click “Select a model to load” (or press the keyboard shortcut L).

  2. Select your model from the list.

  3. Click “Load Model”.

LM Studio should show a “READY” badge.
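
With a model loaded and the READY badge showing, the server can answer chat requests directly. This is a hedged sketch against LM Studio's OpenAI-compatible chat endpoint; the base URL, model name, and temperature are assumptions, so substitute whatever your Developer tab and model list actually show:

```python
import json
import urllib.request
import urllib.error

def build_chat_request(model, prompt):
    """Build the JSON payload the /v1/chat/completions endpoint expects."""
    return {
        "model": model,  # assumed name; use the id shown in LM Studio
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def chat(base_url, model, prompt):
    """Send one chat turn; return the reply text, or None if unreachable."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            body = json.load(resp)
        return body["choices"][0]["message"]["content"]
    except (urllib.error.URLError, OSError):
        return None
```

BoltAI talks to this same endpoint for you, so if this sketch gets a reply, the BoltAI setup below should work too.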

Add LM Studio in BoltAI

The LM Studio server is now ready. Let’s configure it in BoltAI.

  1. Open BoltAI → Settings → AI Services.

  2. Click Add, choose LM Studio, then click Next.

  3. Set a Default Model, then click Add Service.

Start a new chat with LM Studio models

  1. Click New Chat.

  2. Choose the LM Studio service/model in the model switcher and start your conversation.

Quick checks (if it doesn’t connect)

  • LM Studio is running and Enable local server is ON.

  • The Base URL/port in BoltAI matches what LM Studio shows (Developer tab).

  • The model you picked is downloaded and loaded in LM Studio.