How to set up Ollama and integrate it with HighlightX

Your platform/OS: macOS, Windows, Linux

Using Ollama on macOS

1. Download Ollama

Make sure you have Ollama installed and running on your system. If you haven't already, visit the official Ollama website and download Ollama for macOS.

2. Install Ollama

Unzip the downloaded file, then install it.

3. Configure Ollama for Cross-Origin Access

Due to browser security restrictions, you need to configure cross-origin settings for Ollama to function properly.
- Open Terminal: Open the Terminal application on your system.
- Set Environment Variable: Paste the following command into the terminal and press Enter:
launchctl setenv OLLAMA_ORIGINS "*"

4. Restart Ollama

It's crucial to restart the Ollama service for the changes to take effect.
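The simplest way is to quit Ollama from its menu bar icon and open it again. You can also restart it from Terminal; this is a sketch assuming the app is installed under the name Ollama:
osascript -e 'tell application "Ollama" to quit'
open -a Ollama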

5. Enter your Ollama server endpoint

Go to the AI tab in the settings page and enter your Ollama server address. A blue dot indicates that HighlightX is connected to Ollama successfully.
If you have installed Ollama locally on your own computer, use the default address http://localhost:11434.
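Before entering the address, you can confirm the server is reachable from Terminal; a running Ollama server replies with "Ollama is running":
curl http://localhost:11434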

6. Download AI models

Ollama supports the models listed at ollama.com/library. To download an AI model:
- Open Terminal: Open the Terminal application on your system.
- Download Model Syntax: Download a model with the following command:
ollama run <your model name>
- Example: Download the Llama 3.1 8B model:
ollama run llama3.1
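To confirm which models have been downloaded, list them with:
ollama list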

7. Chat with Ollama AI models

The installed models will appear in the model selection. Choose a model, then you can chat with it.

Using Ollama on Windows

1. Download Ollama

Make sure you have Ollama installed and running on your system. If you haven't already, visit the official Ollama website and download Ollama for Windows.

2. Install Ollama

Install it like a normal Windows program.

3. Configure Ollama for Cross-Origin Access

Due to browser security restrictions, you need to configure cross-origin settings for Ollama to function properly.
- Search for and open "Edit the system environment variables"
- Click "Environment Variables"
- Click the "New" button to create a new system variable
- Set the variable name to OLLAMA_ORIGINS and the value to *
- Click OK/Apply to save.
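Alternatively, you can set the variable from a Command Prompt. Note that setx writes a user-level variable rather than a system-wide one, so restart Ollama (and any open terminals) afterwards:
setx OLLAMA_ORIGINS "*"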

4. Restart Ollama

It's crucial to restart the Ollama service for the changes to take effect.
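Quit Ollama from its system tray icon and launch it again from the Start menu. If you prefer the command line, this sketch assumes the tray process is named "ollama app.exe", which may differ between versions:
taskkill /IM "ollama app.exe" /F
Then start Ollama again from the Start menu.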

5. Enter your Ollama server endpoint

Go to the AI tab in the settings page and enter your Ollama server address. A blue dot indicates that HighlightX is connected to Ollama successfully.
If you have installed Ollama locally on your own computer, use the default address http://localhost:11434.
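Before entering the address, you can confirm the server is reachable from PowerShell or Command Prompt; a running Ollama server replies with "Ollama is running":
curl.exe http://localhost:11434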

6. Download AI models

Ollama supports the models listed at ollama.com/library. To download an AI model:
- Open Terminal: Open the Terminal application on your system.
- Download Model Syntax: Download a model with the following command:
ollama run <your model name>
- Example: Download the Llama 3.1 8B model:
ollama run llama3.1
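If you only want to download a model without starting an interactive chat session, ollama pull performs just the download step:
ollama pull llama3.1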

7. Chat with Ollama AI models

The installed models will appear in the model selection. Choose a model, then you can chat with it.

Using Ollama on Linux

1. Download Ollama

Make sure you have Ollama installed and running on your system. If you haven't already, visit the official Ollama website and download Ollama for Linux, or run the command:
curl -fsSL https://ollama.com/install.sh | sh
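The install script registers Ollama as a systemd service. You can confirm it is running with:
systemctl status ollama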

2. Configure Ollama for Cross-Origin Access

Due to browser security restrictions, you need to configure cross-origin settings for Ollama to function properly.
- Edit the systemd service
sudo systemctl edit ollama.service
- Add an Environment line under the [Service] section for each environment variable:
[Service]
Environment="OLLAMA_ORIGINS=*"
- Save and exit.

3. Restart Ollama

It's crucial to restart the Ollama service for the changes to take effect.
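Reload systemd so it picks up the edited service file, then restart the service:
sudo systemctl daemon-reload
sudo systemctl restart ollama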

4. Enter your Ollama server endpoint

Go to the AI tab in the settings page and enter your Ollama server address. A blue dot indicates that HighlightX is connected to Ollama successfully.
If you have installed Ollama locally on your own computer, use the default address http://localhost:11434.
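Before entering the address, you can verify both the server and its installed models by querying the local API, which returns a JSON list of models:
curl http://localhost:11434/api/tags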

5. Download AI models

Ollama supports the models listed at ollama.com/library. To download an AI model:
- Open Terminal: Open the Terminal application on your system.
- Download Model Syntax: Download a model with the following command:
ollama run <your model name>
- Example: Download the Llama 3.1 8B model:
ollama run llama3.1
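To review what is installed, or to free disk space by removing a model you no longer need (using the llama3.1 tag from the example above):
ollama list
ollama rm llama3.1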

6. Chat with Ollama AI models

The installed models will appear in the model selection. Choose a model, then you can chat with it.