Expanded Python instructions for locally running language models
KillianLucas committed Dec 4, 2023
1 parent 4f0cebb commit 96ab4aa
Showing 1 changed file with 17 additions and 13 deletions.
README.md: 30 changes (17 additions & 13 deletions)
@@ -155,19 +155,6 @@ In Python, Open Interpreter remembers conversation history. If you want to start
interpreter.reset()
```

### Local chat

```python
import interpreter

interpreter.local = True # does nothing
interpreter.model = "openai/model" # Use OpenAI format; the model name doesn't matter
interpreter.api_key = "fake_key" # Just needs to be set to something; the value doesn't matter
interpreter.api_base = "http://localhost:1234/v1" # Change to whatever host and port you need

interpreter.chat()
```

### Save and Restore Chats

`interpreter.chat()` returns a List of messages, which can be used to resume a conversation with `interpreter.messages = messages`:
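A minimal sketch of that round trip (the prompt string and its text here are illustrative):

```python
import interpreter

messages = interpreter.chat("My name is Killian.")  # Save the returned messages list
interpreter.reset()                                 # Start a fresh conversation
interpreter.messages = messages                     # Restore the saved history and resume
```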
@@ -212,6 +199,8 @@ interpreter.model = "gpt-3.5-turbo"

### Running Open Interpreter locally

#### Terminal

Open Interpreter uses [LM Studio](https://lmstudio.ai/) to connect to local language models (experimental).

Simply run `interpreter` in local mode from the command line:
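```shell
# assuming the --local flag referenced in the Python section below
interpreter --local
```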
@@ -233,6 +222,21 @@ Once the server is running, you can begin your conversation with Open Interpreter

> **Note:** Local mode sets your `context_window` to 3000, and your `max_tokens` to 1000. If your model has different requirements, set these parameters manually (see below).

#### Python

Our Python package gives you more control over each setting. To replicate `--local` and connect to LM Studio, use these settings:

```python
import interpreter

interpreter.local = True # Disables online features like Open Procedures
interpreter.model = "openai/x" # Tells OI to send messages in OpenAI's format
interpreter.api_key = "fake_key" # LiteLLM, which we use to talk to LM Studio, requires this
interpreter.api_base = "http://localhost:1234/v1" # Point this at any OpenAI compatible server

interpreter.chat()
```

#### Context Window, Max Tokens

You can modify the `max_tokens` and `context_window` (in tokens) of locally running models.
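For example, a minimal sketch, assuming `context_window` and `max_tokens` are set as attributes in the same style as the settings above:

```python
import interpreter

interpreter.local = True
interpreter.context_window = 3000  # assumed attribute-style setting; size in tokens
interpreter.max_tokens = 1000      # assumed attribute-style setting

interpreter.chat()
```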
