AI Engineering

September 20, 2024 (last updated September 21, 2024)

This note collects practical engineering notes on working with AI. For conceptual and philosophical notes, see AI.

Local AI

Python and HuggingFace

See my note on Python, particularly Python#Machine setup, then visit various Hugging Face models / transformers and try running them locally.
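A minimal sketch of what "run them locally" looks like with the transformers pipeline API. The particular model here is just one small example; any pipeline task works the same way, and the first run downloads the weights to your local cache:

```python
from transformers import pipeline

# Download (on first run) and load a small sentiment model locally.
# Model choice is an example; swap in any model ID from the Hub.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Inference happens entirely on your machine.
result = classifier("Running models locally is surprisingly easy.")
print(result)  # a list of {"label": ..., "score": ...} dicts
```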

Elixir and Bumblebee and Livebook

Elixir Livebook can run the model backend locally. You can then easily create "smart cells" that do much of the heavy lifting for you.

ollama

https://ollama.com/

An amazingly simple and effective CLI for working with local LLMs.

ollama run also behaves like an "ollama attach": if the model is already loaded in a background process, it attaches to that process instead of starting a new one (which would eventually exhaust your machine's RAM).
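That same background process also serves an HTTP API, so you can script against it instead of using the CLI. A minimal sketch in Python, assuming the default port 11434 and a model you have already pulled (the name "llama3" is just an example):

```python
import json
import urllib.request


def build_payload(prompt, model="llama3"):
    # stream=False asks Ollama for one complete JSON reply
    # instead of a stream of partial tokens.
    return {"model": model, "prompt": prompt, "stream": False}


def generate(prompt, model="llama3", host="http://localhost:11434"):
    # POST to the local Ollama server's /api/generate endpoint.
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling generate("Why is the sky blue?") returns the model's reply as a string, served by whatever background process ollama run would attach to.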

LocalAI

https://localai.io/

I'm still experimenting with this.