26 January, 2026

Some GPU Considerations for Running Local LLM Models

🌟 Best choice: RTX 3090

  • Two generations behind the current lineup -> low cost
  • 24 GB VRAM -> plenty of memory for LLMs, which are VRAM-hungry
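
To put numbers on "VRAM-hungry", here is a minimal Python sketch of a back-of-envelope estimate of how much VRAM a model's weights need at common quantization levels. The bytes-per-parameter figures and the 1.2x overhead factor (KV cache, activations, CUDA context) are rough assumptions for illustration, not measured values.

# Rough, illustrative VRAM estimate for local LLM weights.
# The bytes-per-parameter values and the 1.2x overhead factor are assumptions.
BYTES_PER_PARAM = {
    "fp16": 2.0,    # full 16-bit weights
    "q8_0": 1.0,    # ~8-bit quantization
    "q4_k": 0.56,   # ~4.5 bits per parameter, typical 4-bit GGUF quant
}

def vram_estimate_gb(params_billion, quant, overhead=1.2):
    """Approximate VRAM (GB) for the weights plus runtime overhead."""
    return params_billion * BYTES_PER_PARAM[quant] * overhead

for size in (7, 13, 34, 70):
    row = ", ".join(f"{q}: ~{vram_estimate_gb(size, q):.0f} GB" for q in BYTES_PER_PARAM)
    print(f"{size}B -> {row}")

On this rough math, a 24 GB card can hold a 13B model at 8-bit, or a 30B-class model at 4-bit, entirely in VRAM, which is exactly what makes the 3090 tier attractive.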

🧠 NVIDIA Consumer GPUs (≥ 8 GB VRAM, sorted by VRAM)

24 GB VRAM

  • RTX 4090
  • RTX 3090
  • RTX 3090 Ti

👉 Absolute best for local LLMs (big context, no offload).
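
To make "no offload" concrete, the sketch below loads a GGUF model with every layer on the GPU using llama-cpp-python. It assumes llama-cpp-python is installed with CUDA support; the model path and prompt are placeholders, not files referenced in this post.

# Minimal sketch: run a GGUF model fully on the GPU (no CPU offload).
# Assumes llama-cpp-python built with CUDA; the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="models/example-13b.Q8_0.gguf",  # placeholder model file
    n_gpu_layers=-1,   # -1 = keep every layer on the GPU
    n_ctx=8192,        # a large context is practical only when it all fits in VRAM
)

out = llm("Why does VRAM matter for local LLMs? Answer in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])

With less VRAM you would lower n_gpu_layers and let the remaining layers run on the CPU, which still works but is much slower.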

20 GB VRAM

  • RTX 4080 Super 20 GB (rumored / non-retail variants; the official SKU ships with 16 GB)
  • RTX 3080 Ti 20 GB (OEM / engineering-sample variants; the retail card has 12 GB)

👉 Rare, but very strong if you find one.

16 GB VRAM

  • RTX 4080
  • RTX 4070 Ti Super
  • RTX 4060 Ti 16 GB
  • RTX 3080 16 GB (laptop GPU only; the desktop RTX 3080 tops out at 12 GB)
  • RTX 2080 Ti (stock card has 11 GB; more VRAM exists only via third-party memory mods)

👉 Sweet spot for serious local LLMs + agents.
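
If you are unsure which tier your machine falls into, a quick check like the Python sketch below reports the GPU name and VRAM; it assumes PyTorch with CUDA support is installed, and the tier messages simply mirror this list.

# Minimal sketch: report the local GPU's VRAM and map it to the tiers above.
# Assumes PyTorch with CUDA support is installed.
import torch

if not torch.cuda.is_available():
    print("No CUDA GPU detected; inference would offload to the CPU.")
else:
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"{props.name}: {vram_gb:.1f} GB VRAM")
    if vram_gb >= 24:
        print("24 GB tier: big models, big context, no offload.")
    elif vram_gb >= 16:
        print("16 GB tier: the sweet spot for mid-size models at 4/8-bit.")
    else:
        print("Below 16 GB: smaller models or aggressive quantization.")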
