llama.cpp with the 30B param LLM on my laptop

Testing llama.cpp with the 30B-parameter LLM on my laptop. It can write code, but be prepared for a wait – the following snippet took over 20 minutes to generate.

View on X →

By Martin | April 3, 2023 | X post