With Ollama running locally, we want to connect it to a Rails app. We'll create a new Rails app and use a simple wrapper gem to make interacting with the Ollama API a little easier:
rails new ai-test
cd ai-test
bundle add ollama-ai
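The examples below use the llama3 model, so make sure it's available locally first. If you haven't already pulled it, Ollama can fetch it with:
ollama pull llama3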
The bin/dev command uses Foreman to run all the necessary processes for development. We'll want to update it to add an additional process for Ollama:
# Procfile.dev
web: env RUBY_DEBUG_OPEN=true bin/rails server
js: yarn build --watch
css: yarn build:css --watch
ai: ollama serve # this is new
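For reference, the bin/dev script that Rails generates is just a small shell wrapper that hands this Procfile to Foreman; it looks roughly like this (your generated file may differ slightly):
#!/usr/bin/env sh

# Install Foreman if it isn't already available
if ! gem list foreman -i --silent; then
  echo "Installing foreman..."
  gem install foreman
fi

# Run every process listed in Procfile.dev, including our new ai: entry
exec foreman start -f Procfile.dev "$@"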
In a new terminal, run bin/dev to start up the server, including Ollama. Let's open a Rails console (rails c) to verify we're connected correctly:
# localhost:11434 is the default
client = Ollama.new(credentials: { address: "http://localhost:11434" })

puts client.generate({
  model: "llama3",
  stream: false,
  prompt: "What's up doc?"
}).first["response"]
This should connect to the Ollama server, send a one-off message (#generate), and print out the response from the first message in the result.
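The ollama-ai gem can also stream the response token by token instead of waiting for the whole reply. Here's a minimal sketch, assuming the gem's block-based streaming interface with server_sent_events enabled:
# Streaming variant: print each chunk of the reply as it arrives
client = Ollama.new(
  credentials: { address: "http://localhost:11434" },
  options: { server_sent_events: true } # opt in to streaming
)

client.generate({
  model: "llama3",
  prompt: "What's up doc?"
}) do |event, raw|
  print event["response"] # each event carries one chunk of the reply
end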
If you're following along at home, let me know if you run into any issues. If not, next we'll set up an actual Rails interface for having a conversation with the AI.