Gemma 4 Benchmarks, iMac G3 Local LLM, and Ollama Android Client for On-Device Inference

Source: DEV Community
Today's Highlights

This week features impressive benchmarks for the new Gemma 4, highlighting its potential for local inference, alongside an incredible feat of running an LLM on a 1998 iMac G3. Additionally, a new native Android client for Ollama allows seamless interaction with self-hosted models from mobile devices.

I technically got an LLM running locally on a 1998 iMac G3 with 32 MB of RAM (r/LocalLLaMA)

Source: https://reddit.com/r/LocalLLaMA/comments/1sdnw7l/i_technically_got_an_llm_running_locally_on_a/

An astonishing demonstration of extreme local inference has surfaced, showcasing an LLM running on a vintage 1998 iMac G3. This machine, equipped with a 233 MHz PowerPC 750 processor and a mere 32 MB of RAM, successfully loaded and ran Andrej Karpathy's 260K-parameter TinyStories model. The model follows the Llama 2 architecture, and its checkpoint size of approximately 1 MB is consistent with roughly 260,000 fp32 parameters at 4 bytes each (about 1.04 MB), leaving ample headroom within the machine's 32 MB of RAM for the runtime itself.
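To make that resource math concrete, here is a minimal C sketch (not the poster's actual code) that memory-maps a llama2.c-style checkpoint and prints the weight and KV-cache footprints. The file name stories260K.bin is illustrative, and the header layout assumed here (seven int32 config fields followed by fp32 weights) reflects the legacy llama2.c export format; treat both as assumptions rather than a description of the poster's setup.

```c
/* Sketch: memory-map a llama2.c-style checkpoint and report its memory
 * footprint against a 32 MB budget. Assumes the legacy llama2.c layout
 * (seven int32 config fields, then fp32 weights); "stories260K.bin" is
 * a hypothetical local path. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

typedef struct {
    int dim;        /* transformer embedding dimension */
    int hidden_dim; /* FFN hidden dimension */
    int n_layers;   /* number of transformer layers */
    int n_heads;    /* number of attention heads */
    int n_kv_heads; /* number of key/value heads */
    int vocab_size; /* tokenizer vocabulary size */
    int seq_len;    /* maximum sequence length */
} Config;

int main(void) {
    const char *path = "stories260K.bin";  /* hypothetical checkpoint */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return 1; }

    /* Map the checkpoint read-only; the OS pages weights in on demand,
     * so resident memory stays close to what inference actually touches. */
    void *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    const Config *cfg = (const Config *)data;
    printf("checkpoint: %lld bytes (~%.2f MB)\n",
           (long long)st.st_size, st.st_size / (1024.0 * 1024.0));
    printf("dim=%d layers=%d heads=%d vocab=%d seq_len=%d\n",
           cfg->dim, cfg->n_layers, cfg->n_heads,
           cfg->vocab_size, cfg->seq_len);

    /* The KV cache is the main runtime allocation beyond the weights:
     * 2 (K and V) * layers * seq_len * kv_dim fp32 values. */
    size_t kv_dim = (size_t)cfg->dim * cfg->n_kv_heads / cfg->n_heads;
    size_t kv_bytes = 2ULL * cfg->n_layers * cfg->seq_len
                    * kv_dim * sizeof(float);
    printf("KV cache: %zu bytes (~%.2f MB)\n",
           kv_bytes, kv_bytes / (1024.0 * 1024.0));

    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```

Memory-mapping rather than copying the weights is what makes a feat like this plausible on a RAM-starved machine: the OS pages in only what the forward pass touches. One practical caveat for a G3 specifically: the PowerPC 750 is big-endian, while checkpoints exported on x86 hardware store little-endian values, so a real port would also need to byte-swap the config fields and weights at load time.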