Create an Ollama server
Local LLMs and Apple Silicon are an ideal pairing: LLMs need lots of VRAM, and Apple Silicon's unified memory fills that role nicely.
But what about the other machines on your network? A spare Mac can host AI models for them too. The only problem is… Ollama only listens on localhost by default.
That’s a perfectly secure setup if you’re using it locally, but not so great when you want to access it remotely. Let’s fix that!
Ollama installed natively
If you’ve installed Ollama by visiting the app’s download page and running the .pkg file locally, you can get it to listen on all interfaces by setting the environment variable OLLAMA_HOST to 0.0.0.0.
launchctl setenv OLLAMA_HOST 0.0.0.0
After that, you’ll need to restart the Ollama app. To do this, click the Ollama icon in the menu bar and select “Quit Ollama,” then re-open the app from your Applications launcher.
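Once Ollama is back up, you can sanity-check the change from another machine on your network. The IP address below is a placeholder for your Mac’s LAN address, and 11434 is Ollama’s default port:

# Replace 192.168.1.42 with your Mac's LAN address
curl http://192.168.1.42:11434/api/tags

If the request returns a JSON list of your installed models, remote access is working.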
Ollama installed via Homebrew
If you’ve opted to install Ollama via Homebrew instead, you should edit the plist file at /opt/homebrew/opt/ollama/homebrew.mxcl.ollama.plist using your favorite text editor.
# For example, using nano:
nano /opt/homebrew/opt/ollama/homebrew.mxcl.ollama.plist
Add the following block within the outermost <dict> tag.
<key>EnvironmentVariables</key>
<dict>
  <key>OLLAMA_HOST</key>
  <string>0.0.0.0</string>
</dict>
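For orientation, here’s roughly where that block lands in the file. The surrounding keys are illustrative only, since the exact contents of the Homebrew-generated plist can vary between versions:

<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>homebrew.mxcl.ollama</string>
  <!-- ...existing keys such as ProgramArguments, RunAtLoad, and KeepAlive... -->
  <key>EnvironmentVariables</key>
  <dict>
    <key>OLLAMA_HOST</key>
    <string>0.0.0.0</string>
  </dict>
</dict>
</plist>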
Then restart Ollama.
brew services restart ollama
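To confirm the service picked up the new setting, check which address port 11434 (Ollama’s default) is bound to. You should see it listening on *:11434 rather than 127.0.0.1:11434:

# List the process listening on Ollama's default port, without resolving names
lsof -nP -iTCP:11434 -sTCP:LISTEN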
Conclusion
With these steps, you should now be able to access your Ollama instance remotely. Try combining it with Continue to supercharge your favorite editor; a sample configuration is sketched below. Happy coding!
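If Continue is your editor extension of choice, you can point it at the remote instance by setting the model’s API base to your Mac’s address. The snippet below follows Continue’s JSON config format, but treat the field names as assumptions; the schema has changed across Continue versions, and the IP, model name, and title are placeholders:

{
  "models": [
    {
      "title": "Ollama (remote Mac)",
      "provider": "ollama",
      "model": "llama3.1:8b",
      "apiBase": "http://192.168.1.42:11434"
    }
  ]
}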