Show HN: A/B Test Your LLM Prompts in Production

switchport.ai

1 point by rjfc 18 hours ago

I noticed that there are a lot of LLMOps platforms focused on offline evals, but I couldn't find anything that manages A/B tests in production and ties different prompts to quantifiable user metrics. For example: being able to test two system prompts and see which one actually improves user success rates or engagement. This could be useful for something like a sales or customer support agent.

So I built a platform that makes it easier to experiment with different system prompts in production. You record your own metrics, and the platform automatically ties them to whichever experiment treatment the user is in. You can update experiments and prompts from the UI, so you don't have to wait for your next deployment.
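To illustrate the core idea (this is a rough sketch of the pattern, not our actual SDK; the function and metric names here are made up for illustration), the gist is deterministic variant assignment per user, with every metric logged alongside that user's variant:

    import hashlib

    PROMPTS = {
        "control":   "You are a helpful support agent.",
        "treatment": "You are a concise, friendly support agent.",
    }

    def assign_variant(user_id: str, experiment: str = "support-prompt-v1") -> str:
        # Hash (experiment, user) so the same user always lands in the
        # same bucket for the lifetime of the experiment.
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return "treatment" if int(digest, 16) % 100 < 50 else "control"

    def record_metric(user_id: str, name: str, value: float) -> None:
        # Log the metric together with the user's variant so outcomes
        # can be aggregated per treatment later (printed here for demo).
        print({"user": user_id, "variant": assign_variant(user_id),
               "metric": name, "value": value})

    # Usage: pick the system prompt per user, then record the outcome.
    user = "user-123"
    system_prompt = PROMPTS[assign_variant(user)]
    record_metric(user, "ticket_resolved", 1.0)

The platform handles the assignment, storage, and aggregation parts of this for you, so your app only needs to fetch the prompt and report metrics.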

It's still pretty early, but I'd love any feedback!