An LLM OS is a computational architecture proposed by Andrej Karpathy in a post on X. In contrast to a traditional operating system, it is not deterministic: it relies on Large Language Models (LLMs) to process instructions.
At the core of the architecture is an LLM inference engine (analogous to the CPU in a computer), which is connected to other LLMs, peripheral devices (video, audio, etc.), the internet, "tools" (calculator, code interpreter, etc.), and a file system (database with vector embeddings).
A system running on an LLM OS is inherently less predictable than a conventional computer. In exchange, it offers far greater flexibility, enabling it to adapt to new tasks and environments and potentially to handle open-ended problems that conventionally programmed state-of-the-art systems cannot.
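The architecture above can be sketched as a kernel loop: a central inference engine (stubbed here with a trivial rule instead of a real model) decides whether to answer directly or dispatch to a registered tool, then folds the result back into its context, which stands in for the file system. This is a minimal illustrative sketch; the class and method names (`LLMKernel`, `register_tool`, `infer`) are hypothetical, not part of any described implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class LLMKernel:
    # Registered peripherals/tools, e.g. a calculator or code interpreter.
    tools: Dict[str, Callable[[str], str]] = field(default_factory=dict)
    # Stand-in for the "file system": a running log of instructions and results.
    context: List[str] = field(default_factory=list)

    def register_tool(self, name: str, fn: Callable[[str], str]) -> None:
        self.tools[name] = fn

    def infer(self, instruction: str) -> str:
        # Stub for the LLM inference engine. A real system would call a model
        # here; we use a trivial prefix rule to decide on a tool call.
        if instruction.startswith("calc:"):
            return "CALL calculator " + instruction[len("calc:"):]
        return "ANSWER " + instruction

    def run(self, instruction: str) -> str:
        self.context.append(instruction)
        decision = self.infer(instruction)
        if decision.startswith("CALL "):
            _, tool_name, arg = decision.split(" ", 2)
            result = self.tools[tool_name](arg)
            self.context.append(result)  # tool output flows back into context
            return result
        return decision[len("ANSWER "):]


kernel = LLMKernel()
# A deliberately restricted calculator tool for arithmetic expressions.
kernel.register_tool("calculator", lambda expr: str(eval(expr, {"__builtins__": {}})))
print(kernel.run("calc:2+3*4"))  # → 14
```

The key design point mirrored here is that the kernel itself performs no computation: it only routes between the inference step, the tools, and the shared context.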
The LLM Knowledge Base is a collection of bite-sized explanations for commonly used terms and abbreviations related to Large Language Models and Generative AI.
It is an educational resource that helps you stay up to date with the latest developments in AI research and its applications.