Author Archives: Massed Compute

Best Llama 3 Inference Endpoint – Part 2

Sections: Considerations, Testing Scenario, Startup Commands, Token/Sec Results.

| Endpoint | Hardware | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Average (tokens/sec) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| vLLM | 4x A6000 | 14.7 | 14.7 | 15.2 | 15.0 | 15.0 | 14.92 |
| vLLM | 2x H100 | 20.3 | 20.5 | 20.3 | 21.0 | 20.7 | 20.56 |
| Hugging Face TGI | 4x A6000 | 12.38 | 12.53 | 12.60 | 12.55 | 12.33 | 12.48 |
| Hugging Face TGI | 2x H100 | 21.29 | 21.40 | 21.50 | 21.60 | 21.41 | 21.44 |

Looking purely at token/sec results, Hugging Face TGI produces the most tokens/sec on […]
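As a minimal sketch of how a tokens/sec figure like the ones above can be measured from the client side, the snippet below times a single request against an OpenAI-compatible completions endpoint (vLLM exposes one by default). The URL, model name, and prompt are illustrative placeholders, not the benchmark's actual settings.

```python
# Minimal throughput sketch against an OpenAI-compatible /v1/completions
# endpoint (vLLM serves one by default). The URL, model name, and prompt
# are placeholders, not the benchmark's actual settings.
import time

import requests

ENDPOINT = "http://localhost:8000/v1/completions"  # assumed local server
payload = {
    "model": "meta-llama/Meta-Llama-3-70B-Instruct",
    "prompt": "Explain the difference between vLLM and TGI in one paragraph.",
    "max_tokens": 256,
}

start = time.time()
response = requests.post(ENDPOINT, json=payload, timeout=300)
response.raise_for_status()
elapsed = time.time() - start

# The OpenAI-compatible response reports how many tokens were generated.
completion_tokens = response.json()["usage"]["completion_tokens"]
print(f"{completion_tokens / elapsed:.2f} tokens/sec")
```

Note this timing is end-to-end, so it folds prompt processing into the throughput number; a streaming client that timestamps the first token would separate the two.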

Leverage Hugging Face’s TGI to Create Large Language Models (LLMs) Inference APIs – Part 2

Introduction – Multiple LLM APIs: If you haven’t already, go back and read Part 1 of this series. In this guide we look at how you can serve multiple models in the same VM. As you start to decide how you want to serve models as an inference endpoint, you have a few […]
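As a client-side illustration of the multi-model setup the guide describes, the sketch below assumes two TGI containers running in the same VM, each bound to its own host port; the ports and model names are hypothetical.

```python
# Client-side view of two models served from the same VM: one TGI
# container per model, each mapped to its own host port. Ports and model
# names below are hypothetical.
import requests

MODELS = {
    "llama-3-8b-instruct": "http://localhost:8080",
    "mistral-7b-instruct": "http://localhost:8081",
}

def generate(model: str, prompt: str, max_new_tokens: int = 128) -> str:
    """Call TGI's /generate route on the container serving `model`."""
    resp = requests.post(
        f"{MODELS[model]}/generate",
        json={"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["generated_text"]

print(generate("llama-3-8b-instruct", "What is an inference endpoint?"))
```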

Leverage Hugging Face’s TGI to Create Large Language Models (LLMs) Inference APIs – Part 1

Introduction: Are you interested in setting up an inference endpoint for one of your favorite models? Have you been wanting to leverage full, unquantized versions of models but found the process too complex or time-consuming? Do you wish there was a simple and efficient way to deploy full models for your own projects or […]
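For a sense of what the finished endpoint looks like from the caller's side, here is a minimal sketch using the huggingface_hub client against a TGI server; the URL and prompt are assumptions, and the server is presumed to already be running.

```python
# Minimal client sketch: query a running TGI endpoint with the
# huggingface_hub client. URL and prompt are assumptions; the TGI
# container is presumed to already be listening on this port.
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")  # assumed TGI endpoint
print(client.text_generation("What is Llama 3?", max_new_tokens=100))
```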

AutoGen with Ollama/LiteLLM – Setup on Linux VM

In the ever-evolving landscape of AI technology, Microsoft continues to push the boundaries with groundbreaking projects. Among these innovative endeavors is their AutoGen project. AutoGen provides a multi-agent conversation framework as a high-level abstraction. With this framework, one can conveniently build LLM workflows. As developers grapple with the increasing complexity of modern software applications, AutoGen offers […]
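As a rough sketch of the setup this post walks through, the snippet below points AutoGen at a local model served through a LiteLLM/Ollama OpenAI-compatible endpoint; the base_url, port, and model name are assumptions for illustration, and the proxy and model must already be running.

```python
# Rough sketch of AutoGen talking to a local model behind a
# LiteLLM/Ollama OpenAI-compatible endpoint. base_url, port, and model
# name are assumptions; the proxy and model must already be running.
from autogen import AssistantAgent, UserProxyAgent

config_list = [
    {
        "model": "ollama/llama3",             # assumed name registered with LiteLLM
        "base_url": "http://localhost:4000",  # assumed LiteLLM proxy address
        "api_key": "not-needed",              # local endpoints ignore the key
    }
]

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent(
    "user",
    human_input_mode="NEVER",      # run unattended for the demo
    max_consecutive_auto_reply=1,  # keep the exchange short
    code_execution_config=False,
)

# Kick off a two-agent conversation against the local model.
user_proxy.initiate_chat(assistant, message="Write a haiku about GPUs.")
```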