Machine learning researchers using Ollama will see a speed boost in LLM processing, as the open-source tool now uses MLX on Apple Silicon to take full advantage of unified memory.

Anyone working with large language models (LLMs) wants results as quickly as possible. One technique is to cluster multiple Macs to increase the processing power on hand, but Apple's own machine-learning framework, MLX, offers another route.

The developers of Ollama, the open-source model management and execution tool, have taken that route. In a March 30 update, the project announced a preview version of the tool for Apple Silicon that takes advantage of MLX.
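For readers who want to try it, the MLX-backed preview is a drop-in replacement, so existing client code is unchanged; the backend choice happens inside Ollama itself. Below is a minimal sketch of querying a local Ollama server through its standard REST API on port 11434. The model name "llama3.2" is an assumption for illustration; substitute any model you have pulled.

import requests

# Ollama serves a local REST API on port 11434 by default ("ollama serve").
# Nothing here is MLX-specific: the preview build selects the MLX backend
# internally on Apple Silicon, so the request below is unchanged.
OLLAMA_URL = "http://localhost:11434/api/generate"

def generate(prompt: str, model: str = "llama3.2") -> str:
    """Send a single non-streaming generation request to a local Ollama server."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    # The non-streaming endpoint returns the full completion in "response".
    return resp.json()["response"]

if __name__ == "__main__":
    print(generate("In one sentence, what is unified memory on Apple Silicon?"))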