
How to choose the right LLM for your use case

New LLMs are dropping every other day, each with its own strengths and weaknesses

Published on March 31, 2024 by Toni Engelhardt

Unfortunately, there is no universal answer to this question. It depends on your use case, your budget, and your personal preferences. A good strategy is the following:

If you are new to the game, start with the industry standard: GPT-3.5 for simple tasks and GPT-4 Turbo for more complex ones. Use PROMPTMETHEUS to experiment until you get satisfying and reproducible results. If you do not get anywhere with OpenAI's models, try models from other providers, such as Anthropic's Claude, Google's Gemini, or Mistral. Once you have a working prompt, optimize it for performance, speed, reliability, and cost by comparing different LLMs and model parameters. As a rule of thumb, the cheapest model that is fast enough and does the job is the best one.
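To make the "cheapest model that does the job" comparison concrete, here is a minimal sketch that runs the same prompt against two OpenAI models and reports latency and an estimated cost. It uses the official OpenAI Python SDK; the prompt, the model names, and the price table are illustrative assumptions, so check current model availability and pricing before relying on the numbers.

```python
# Sketch: compare one prompt across two models on latency and estimated cost.
# Assumes OPENAI_API_KEY is set; model names and prices are placeholders.
import time
from openai import OpenAI

client = OpenAI()

PROMPT = "Summarize the following support ticket in one sentence: ..."

# Assumed per-1K-token prices in USD -- replace with the provider's current rates.
PRICES = {
    "gpt-3.5-turbo": {"input": 0.0005, "output": 0.0015},
    "gpt-4-turbo":   {"input": 0.01,   "output": 0.03},
}

for model, price in PRICES.items():
    start = time.perf_counter()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # less sampling noise makes the comparison fairer
    )
    latency = time.perf_counter() - start

    usage = response.usage
    cost = (usage.prompt_tokens / 1000) * price["input"] \
         + (usage.completion_tokens / 1000) * price["output"]

    print(f"{model}: {latency:.2f}s, ~${cost:.5f}")
    print(response.choices[0].message.content)
```

If the cheaper model's output is good enough for your use case, the printout makes the cost and speed trade-off explicit instead of leaving it to gut feeling.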

If you are more experienced with Prompt Engineering and AI Development, start with the model that has historically worked best for use cases similar to the one at hand. Before you start fine-tuning your prompt, though, execute an early version with a few different LLMs to see which one is most promising, and fine-tune your prompt with that one. Once you achieve great results, revisit your model choice and optimize for performance, speed, and reliability.
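The "run an early version against a few LLMs" step can be a quick cross-provider smoke test. The sketch below follows the official OpenAI and Anthropic Python clients; the prompt and the specific model IDs are assumptions and may need updating.

```python
# Sketch: run one early prompt version against two providers before committing
# to one for prompt fine-tuning. Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY
# are set in the environment; model IDs are examples only.
from openai import OpenAI
import anthropic

prompt = "Extract the sender's name and their request from this email: ..."

def run_openai(model: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def run_anthropic(model: str) -> str:
    client = anthropic.Anthropic()
    resp = client.messages.create(
        model=model,
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

candidates = [
    ("gpt-4-turbo", run_openai),
    ("claude-3-sonnet-20240229", run_anthropic),
]

for model, run in candidates:
    print(f"--- {model} ---")
    print(run(model))
```

Eyeballing the outputs side by side is usually enough at this stage; the detailed optimization comes later, once you know which model family responds best to your prompt.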

Please also check out the posts "Building a Prompt Engineering IDE (VS Code for AI)" and "Prompt Engineering Tips & Tricks" for more information.
