Discussion

Apple-feed boosted
Rost Glukhov
@ros@techhub.social · 4 days ago

NVIDIA DGX Spark vs Mac Studio vs RTX-4080: Ollama Performance Comparison:
https://www.glukhov.org/post/2025/10/dgx-spark-vs-mac-studio-vs-rtx4080/
#ollama #nvidia #dgx #dgxspark #performance #devops #mac #macstudio #llm #gpt

Rost Glukhov | Personal site and technical blog

NVIDIA DGX Spark vs Mac Studio vs RTX-4080: Ollama Performance Comparison

Real-world Ollama performance comparison running GPT-OSS 120b (a 117B parameter MoE model with 5.1B active parameters) across NVIDIA DGX Spark, Mac Studio, and RTX 4080. Detailed benchmarks show prompt evaluation, token generation rates, and CPU/GPU offloading behavior for this 65GB model.
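The headline numbers in benchmarks like this usually come straight from Ollama's own timing output. A minimal, guarded sketch of how to reproduce them (the prompt is a placeholder, and the `ollama` calls only run if a local install is present):

```shell
# Guarded: only talk to Ollama if it is actually installed.
if command -v ollama >/dev/null 2>&1; then
  # --verbose appends timings (prompt eval rate, eval rate) after the reply.
  ollama run gpt-oss:120b --verbose "Explain MoE routing in two sentences."
  # Shows how much of the model is resident on GPU vs offloaded to CPU.
  ollama ps
fi

# The "eval rate" Ollama reports is simply generated tokens / elapsed
# seconds, e.g. 256 tokens in 8 s:
rate=$(awk 'BEGIN { printf "%.1f", 256 / 8 }')
echo "$rate tokens/s"   # 32.0 tokens/s
```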
#selfhosting boosted
Rost Glukhov
@ros@techhub.social · 7 days ago

Docker Model Runner cheatsheet with commands, examples, and best practices. Learn docker model pull, run, package, and configuration options for deploying AI models locally with Docker's official LLM tool:
https://www.glukhov.org/post/2025/10/docker-model-runner-cheatsheet/
#cheatsheet #llm #devops #selfhosting #ai #ollama

Rost Glukhov | Personal site and technical blog

Docker Model Runner Cheatsheet: Commands & Examples

Complete Docker Model Runner cheatsheet with commands, examples, and best practices. Learn docker model pull, run, package, and configuration options for deploying AI models locally with Docker's official LLM tool.
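For a taste of what the cheatsheet covers: all subcommands hang off the `docker model` plugin namespace. A guarded sketch (`ai/smollm2` is an example model tag, not one named in the post, and the commands only run if the plugin is installed):

```shell
# Guarded: run the commands only if the Docker Model Runner plugin is present.
if docker model status >/dev/null 2>&1; then
  docker model pull ai/smollm2            # fetch a model as an OCI artifact
  docker model run ai/smollm2 "Hello!"    # one-shot prompt against the model
  docker model list                       # models available locally
fi
base_cmd="docker model"
echo "$base_cmd"
```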
#selfhosting boosted
Rost Glukhov
@ros@techhub.social · last week

Compare Docker's new Model Runner with Ollama for local LLM deployment. Detailed analysis of performance, ease of use, GPU support, API compatibility, and when to choose each solution for your AI workflow in 2025:
https://www.glukhov.org/post/2025/10/docker-model-runner-vs-ollama-comparison/
#llm #devops #selfhosting #ai #docker #ollama

Rost Glukhov | Personal site and technical blog

Docker Model Runner vs Ollama: Which to Choose?

Compare Docker's new Model Runner with Ollama for local LLM deployment. Detailed analysis of performance, ease of use, GPU support, API compatibility, and when to choose each solution for your AI workflow in 2025.
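The API-compatibility point can be sketched concretely: both tools can expose an OpenAI-compatible chat endpoint, so the same request body works against either and only the base URL differs. The ports and paths below are common defaults (Ollama on 11434; Docker Model Runner's TCP endpoint must be enabled separately) and the model name is a placeholder, so treat them as assumptions to verify against current docs:

```shell
# One OpenAI-style request body reused against both local servers.
body='{"model":"llama3.2","messages":[{"role":"user","content":"Hi"}]}'

for base in http://localhost:11434/v1 http://localhost:12434/engines/v1; do
  # -sf: quiet, and fail with a non-zero status instead of printing an
  # HTML error page, so the fallback message fires when nothing is listening.
  curl -sf -H 'Content-Type: application/json' -d "$body" \
    "$base/chat/completions" || echo "no server at $base, skipping"
done
```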
Encryptr.net Social

This is a forward-thinking server running the Bonfire social media platform.

LGBTQA+ and BPOC friendly.
