
LLM OS

It has more knowledge about the operating system than any single human.



Core Module

The large language model (LLM), located at the center of the figure, is the core of the entire architecture: it generates natural language responses by processing input text.
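As a minimal sketch of this role (the function names are illustrative and the model call is stubbed, since the source describes a diagram rather than an API):

```python
# Minimal sketch of the core module: text in, generated text out.
# `run_llm` is a hypothetical stub standing in for a real model call.
def run_llm(prompt: str) -> str:
    """Placeholder for the LLM's decoding step."""
    return f"[generated response to: {prompt}]"

def core_module(input_text: str) -> str:
    # The LLM processes the input text and returns natural language.
    return run_llm(input_text)

print(core_module("What is an LLM OS?"))
```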

Interaction with Other Modules

  • RAM (Random Access Memory): A two-way arrow connects it to the LLM, indicating that the LLM can both read data from RAM and write data to it. The RAM holds a "context window", likely the storage area for the context the LLM depends on while processing text.

  • Disk: Connected to the LLM through the file system, which contains "embeddings", likely the data structures used to store word or semantic embedding vectors. The LLM can both read data from the disk and write data to it.

  • Software 1.0 Tools: "Classical computer" tools such as calculators, Python interpreters, and terminals. A two-way arrow connects them to the LLM, indicating that the LLM can call these tools to obtain or provide data.

  • CPU (Central Processing Unit): A two-way arrow connects it to the LLM, indicating that the LLM's operation depends on the CPU's computing power.

  • Ethernet: Through Ethernet, the LLM can communicate with other LLMs, showing that different LLMs can exchange data and collaborate over the network.

  • Browser: A two-way arrow connects it to the LLM, indicating that the LLM can interact with the browser, likely to fetch web-page content or to display generated content.

  • Peripheral Devices I/O: Video and audio devices. Their two-way connection with the LLM indicates that the LLM can both process and generate video and audio content.

The entire architecture shows how the large language model interacts with various components of the OS.
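The interactions above can be sketched as a small, hypothetical coordinator: a bounded context window standing in for RAM, a toy embedding store standing in for the disk, and a dispatch path to a "Software 1.0" tool. Every name here is illustrative, not Supernet's actual API, and the LLM answer itself is stubbed.

```python
# Hypothetical sketch of the LLM "kernel" coordinating the modules above.
from collections import deque

class LLMOS:
    def __init__(self, context_limit: int = 4):
        # RAM: a bounded context window the LLM reads from and writes to.
        self.context = deque(maxlen=context_limit)
        # Disk: a file-system-like store mapping text to embedding vectors.
        self.embeddings = {}
        # Software 1.0 tools the LLM can call out to (eval is a toy here).
        self.tools = {"calculator": lambda expr: str(eval(expr))}

    def remember(self, text: str) -> None:
        self.context.append(text)                    # write to RAM
        self.embeddings[text] = [float(len(text))]   # toy "embedding" to disk

    def step(self, prompt: str) -> str:
        self.remember(prompt)
        if prompt.startswith("calc:"):               # route to a classical tool
            return self.tools["calculator"](prompt[5:])
        # Otherwise the LLM itself would answer; stubbed for runnability.
        return f"LLM answer using {len(self.context)} context items"

os_ = LLMOS()
print(os_.step("calc:2+3"))   # tool path: evaluates "2+3"
print(os_.step("hello"))      # LLM path: answers from context
```

The `maxlen` on the deque mirrors the fixed-size context window: once the limit is reached, the oldest entries are evicted automatically, just as old tokens fall out of an LLM's context.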