📄️ Argilla
Argilla is an open-source data curation platform for LLMs.
📄️ Comet Tracing
There are two ways to trace your LangChain executions with Comet:
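Roughly, the two approaches are enabling tracing globally through an environment variable or attaching a Comet tracer callback to an individual run. The sketch below assumes the `LANGCHAIN_COMET_TRACING` variable and the `CometTracer` class from `langchain_community`; check the Comet Tracing page for the exact names.

```python
import os

# Option 1: enable Comet tracing for every chain via an environment
# variable (assumed variable name; set it before building any chains).
os.environ["LANGCHAIN_COMET_TRACING"] = "true"

# Option 2: attach the tracer explicitly to a single invocation
# (assumed import path within langchain-community).
from langchain_community.callbacks.tracers.comet import CometTracer

tracer = CometTracer()
# chain.invoke({"input": "..."}, config={"callbacks": [tracer]})
```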
📄️ Confident
Confident provides the DeepEval package for unit testing LLMs.
📄️ Context
Context provides user analytics for LLM-powered products and features.
📄️ Fiddler
Fiddler is the pioneer in enterprise Generative and Predictive system ops, offering a unified platform that enables Data Science, MLOps, Risk, Compliance, Analytics, and other LOB teams to monitor, explain, analyze, and improve ML deployments at enterprise scale.
📄️ Infino
Infino is a scalable telemetry store designed for logs, metrics, and traces. Infino can function as a standalone observability solution or as the storage layer in your observability stack.
📄️ Label Studio
Label Studio is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.
📄️ LLMonitor
LLMonitor is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.
📄️ PromptLayer
PromptLayer is a platform for prompt engineering. It also supports LLM observability, letting you visualize requests, version prompts, and track usage.
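In LangChain, this typically means attaching PromptLayer's callback handler to a model. A minimal sketch, assuming the `langchain_community` handler, the `langchain-openai` chat model, and the `PROMPTLAYER_API_KEY` environment variable:

```python
import os

from langchain_community.callbacks import PromptLayerCallbackHandler
from langchain_openai import ChatOpenAI

# PromptLayer reads its API key from the environment (assumed variable name).
os.environ["PROMPTLAYER_API_KEY"] = "pl_..."

# Tag requests so they are easy to filter in the PromptLayer dashboard.
llm = ChatOpenAI(callbacks=[PromptLayerCallbackHandler(pl_tags=["example"])])
llm.invoke("Briefly explain prompt versioning.")
```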
🛠️ SageMaker Tracking
Amazon SageMaker is a fully managed service that is used to quickly and easily build, train and deploy machine learning (ML) models.
📄️ Streamlit
Streamlit is a faster way to build and share data apps.
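For LangChain apps, the integration point is usually the Streamlit callback handler, which streams an agent's intermediate steps into the page as it runs. A minimal sketch, where `agent_executor` is a placeholder for your own chain or agent:

```python
import streamlit as st
from langchain_community.callbacks.streamlit import StreamlitCallbackHandler

prompt = st.chat_input("Ask something")
if prompt:
    # Render thoughts and tool calls live inside this container.
    st_callback = StreamlitCallbackHandler(st.container())
    # `agent_executor` is a placeholder for your own runnable.
    response = agent_executor.invoke(
        {"input": prompt}, config={"callbacks": [st_callback]}
    )
    st.write(response["output"])
```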
📄️ Trubrics
Trubrics is an LLM user analytics platform that lets you collect, analyse and manage user prompts and feedback on AI models.
📄️ Upstash Ratelimit Callback
In this guide, we will go over how to add rate limiting based on the number of requests or the number of tokens using UpstashRatelimitHandler. This handler uses Upstash's ratelimit library, which relies on Upstash Redis.
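A minimal sketch of request-based limiting; the constructor parameters (`identifier`, `request_ratelimit`) and the placeholder `chain` are assumptions to be checked against the guide itself:

```python
from upstash_ratelimit import FixedWindow, Ratelimit
from upstash_redis import Redis

from langchain_community.callbacks import (
    UpstashRatelimitError,
    UpstashRatelimitHandler,
)

# Allow at most 10 chain invocations per 10-second window.
ratelimit = Ratelimit(
    redis=Redis.from_env(),  # reads UPSTASH_REDIS_REST_URL / _TOKEN
    limiter=FixedWindow(max_requests=10, window=10),
)

# The identifier scopes the limit, e.g. to a user id (assumed parameter names).
handler = UpstashRatelimitHandler(identifier="user-123", request_ratelimit=ratelimit)

try:
    # `chain` is a placeholder for your own runnable.
    chain.invoke("hello", config={"callbacks": [handler]})
except UpstashRatelimitError:
    print("Rate limit reached, try again later.")
```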
📄️ UpTrain
UpTrain [github || website || docs] is an open-source platform to evaluate and improve LLM applications. It provides grades for 20+ preconfigured checks (covering language, code, and embedding use cases), performs root cause analysis on failure cases, and provides guidance for resolving them.