What's Your Favorite Open Source Single & Multimodal Large Language Model? (Also Called Local LLMs)

I've been focusing on open source variants to eliminate dependency on OpenAI, Google and other premium LLM API providers; and
That's because I want to mitigate risks from rising API costs, server downtimes (resulting in additional API costs), lack of control over future API pricing, and user privacy / data security issues ...

My favorites include Vicuna (7B to 34B parameter variants) and Mistral (fine-tuned on the OpenOrca dataset, 7B to 10B variants); and
For multimodal, it's LLaVA 1.6 (7B to 34B variants, an LLM with visual analysis capabilities), Bark (for neural text-to-speech with emotions) and Meta's MAGNeT (for sound effects and background music generation) ...

Meanwhile, I use llama.cpp (both for developing server APIs on the GPU machines of my business, and as a CLI client tool on my work machines); and
With llama.cpp, you can run these open source LLMs even without NVIDIA / AMD GPUs (though it'll be much faster with a GPU, even a consumer-grade one), since you can build the server and CLI C++ binaries with optimization support for CPU / Metal and system RAM ...
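For anyone who hasn't tried it, the build-and-serve workflow looks roughly like this. This is a minimal sketch, not a definitive recipe: the model filename is an assumption (any quantized GGUF file works), and exact binary names / flags can vary between llama.cpp releases, so check the project README for your version.

```shell
# Clone and build llama.cpp from source (sketch; verify against the current README)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# CPU build works out of the box; Metal is enabled by default on Apple Silicon
cmake -B build
cmake --build build --config Release

# Serve a quantized GGUF model over HTTP (model path is a placeholder assumption)
./build/bin/llama-server -m models/mistral-7b-openorca.Q4_K_M.gguf --port 8080

# Or chat directly from the CLI binary
./build/bin/llama-cli -m models/mistral-7b-openorca.Q4_K_M.gguf -p "Hello"
```

The nice part is that the same GGUF file runs on both the GPU server boxes and a plain laptop CPU; only the build flags change.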

What's yours?
