Quickstart

Nosia is a platform that allows you to run an AI model on your own data. It is designed to be easy to install and use.

One-command installation

On a macOS, Debian or Ubuntu machine

The following command installs Docker, Ollama, and Nosia on a macOS, Debian or Ubuntu machine.

curl -fsSL https://raw.githubusercontent.com/nosia-ai/nosia-install/main/nosia-install.sh | sh
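
If you prefer to review the script before executing it, you can download it first and run it in a second step:

curl -fsSL https://raw.githubusercontent.com/nosia-ai/nosia-install/main/nosia-install.sh -o nosia-install.sh
less nosia-install.sh
sh nosia-install.sh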

You should see the following output:

✅ Setting up environment
✅ Setting up Docker
✅ Setting up Ollama
✅ Starting Ollama
✅ Starting Nosia

You can now access Nosia at https://nosia.localhost
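
To confirm that all services are up, you can list the running containers; this sketch assumes you are in the directory where the installer placed docker-compose.yml:

docker compose -f ./docker-compose.yml ps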

Custom installation

With a remote Ollama

By default, Nosia sets up Ollama locally.

To use a remote Ollama instance, set the OLLAMA_BASE_URL environment variable during configuration.

Example:

Replace $OLLAMA_HOST_IP with the FQDN or IP address of your Ollama host and run:

curl -fsSL https://raw.githubusercontent.com/nosia-ai/nosia-install/main/nosia-install.sh \
  | OLLAMA_BASE_URL=http://$OLLAMA_HOST_IP:11434 sh
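
Before running the installer, you can check that the remote Ollama instance is reachable from the machine that will run Nosia, for example with Ollama's version endpoint:

curl http://$OLLAMA_HOST_IP:11434/api/version

A reachable server answers with a small JSON document containing its version.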

With a custom completion model

By default, Nosia uses:

  • Completion model: qwen2.5
  • Embeddings model: nomic-embed-text
  • Checking model: bespoke-minicheck

You can use any completion model available on Ollama by setting the LLM_MODEL environment variable during the installation.

Example:

To use the mistral model, run:

curl -fsSL https://raw.githubusercontent.com/nosia-ai/nosia-install/main/nosia-install.sh \
  | LLM_MODEL=mistral sh
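
Assuming the installer reads all of these settings from the environment, as the examples above suggest, the variables can be combined, for instance a custom completion model together with a remote Ollama:

curl -fsSL https://raw.githubusercontent.com/nosia-ai/nosia-install/main/nosia-install.sh \
  | LLM_MODEL=mistral OLLAMA_BASE_URL=http://$OLLAMA_HOST_IP:11434 sh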

With a custom embeddings model

At this time, the nomic-embed-text embeddings model is required for Nosia to work.
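
If you point Nosia at a remote Ollama instance, make sure this model has been pulled there as well:

ollama pull nomic-embed-text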

Advanced installation

On macOS with a Debian or Ubuntu VM

On macOS, install Homebrew:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Then install Ollama with Homebrew:

Replace $OLLAMA_HOST_IP with the IP address of the Ollama host machine, so that the server listens on an address reachable from the VM, and run the following commands:

brew install ollama
ollama pull qwen2.5
ollama pull bespoke-minicheck
ollama pull nomic-embed-text
OLLAMA_HOST=$OLLAMA_HOST_IP:11434 OLLAMA_MAX_LOADED_MODELS=3 ollama serve
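
Before installing Nosia, you can verify from the VM that Ollama on the host is reachable and that the models were pulled, for example by listing them:

curl http://$OLLAMA_HOST_IP:11434/api/tags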

On the Debian/Ubuntu VM:

Replace $OLLAMA_HOST_IP with the IP address of the host machine and run the following command:

curl -fsSL https://raw.githubusercontent.com/nosia-ai/nosia-install/main/nosia-install.sh \
  | OLLAMA_BASE_URL=http://$OLLAMA_HOST_IP:11434 sh

You should see the following output:

✅ Setting up environment
✅ Setting up Docker
✅ Setting up Ollama
✅ Starting Ollama
✅ Starting Nosia

From the VM, you can access Nosia at https://nosia.localhost

If you want to access Nosia from the host machine, you may need to forward the port from the VM to the host machine.

Replace $USER with the VM username, $VM_IP with the IP address of the VM, and $LOCAL_PORT with the port you want to use on the host machine (for example, 8443), then run the following command:

ssh $USER@$VM_IP -L $LOCAL_PORT:localhost:443

After running the command, you can access Nosia at https://nosia.localhost:$LOCAL_PORT.
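
For example, with a hypothetical VM user ubuntu, a VM address of 192.168.64.2, and local port 8443:

ssh ubuntu@192.168.64.2 -L 8443:localhost:443

Nosia would then be available on the host at https://nosia.localhost:8443.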

Upgrade

You can upgrade the services with the following command:

./script/upgrade

Start

You can start the services with the following command:

./script/start

Stop

You can stop the services with the following command:

./script/stop

Troubleshooting

If you encounter an issue:

  • during the installation, you can check the logs at ./log/production.log
  • while waiting for an AI response, you can check the jobs at http://<IP>:3000/jobs
  • with Nosia, you can check the logs with docker compose -f ./docker-compose.yml logs -f (see the example after this list)
  • with the Ollama server, you can check the logs at ~/.ollama/logs/server.log
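
For example, to scan the most recent container logs for errors, you could run:

docker compose -f ./docker-compose.yml logs --tail 200 | grep -i error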

If you need further assistance, please open an issue!
