Intelligent gateway for Solana agents
Turn simple APIs into powerful AI agents. Sweep sits between your apps and AI services, making everything work together seamlessly.
Sweep is built on (and by some of the core contributors of) Envoy proxy with the belief that:
Prompts are nuanced, often opaque user requests that require the same capabilities as traditional HTTP requests: secure handling, smart routing, strong observability, and seamless integration with backend (API) systems for personalization, all kept separate from business logic.
Sweep operates as an independent process running alongside your application, providing features like HTTP connection management, filtering, and routing.
It works with applications written in a wide range of languages, including Python, Java, Go, Node.js, PHP, Ruby, and more.
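Because Sweep runs as a separate process, your application only needs to speak HTTP to it. Below is a minimal sketch of that interaction in Python; the gateway address (localhost:10000) and the OpenAI-compatible request shape are illustrative assumptions, not documented defaults.

```python
# Minimal sketch: an application delegating LLM traffic to a local Sweep
# sidecar. The endpoint and the OpenAI-compatible payload shape below are
# assumptions for illustration only.
import requests

SWEEP_GATEWAY = "http://localhost:10000/v1/chat/completions"  # hypothetical address

def ask(prompt: str) -> str:
    # The app speaks plain HTTP; Sweep handles connection management,
    # filtering, and routing to the upstream LLM out of process.
    resp = requests.post(
        SWEEP_GATEWAY,
        json={
            "model": "gpt-4o-mini",  # hypothetical upstream model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize today's Solana validator metrics."))
```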
Sweep is engineered with specialized sub-billion-parameter LLMs designed for fast, cost-effective, and accurate prompt handling. These models are best-in-class for critical prompt-related tasks.
Sweep offers capabilities for LLM calls originating from your application, including a suite of features to secure, observe, and manage prompts and model calls to upstream LLMs. Automatic handling of certain LLM failures provides enhanced availability and disaster-recovery coverage. Built-in retry logic lets developers manage multiple connections to upstream LLMs, keeping your application running even when a provider fails.
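The sketch below illustrates the retry-with-failover pattern described above; the upstream endpoints, priority order, and backoff schedule are hypothetical, shown only to make the behavior concrete.

```python
# Conceptual sketch of retry-with-failover across multiple upstream LLMs:
# try each configured upstream in priority order, retrying transient
# failures with backoff before failing over. All names are illustrative.
import time
import requests

UPSTREAMS = [  # hypothetical upstream LLM endpoints, in priority order
    "https://primary-llm.example.com/v1/chat/completions",
    "https://fallback-llm.example.com/v1/chat/completions",
]

def call_with_failover(payload: dict, retries_per_upstream: int = 2) -> dict:
    last_error = None
    for upstream in UPSTREAMS:
        for attempt in range(retries_per_upstream):
            try:
                resp = requests.post(upstream, json=payload, timeout=30)
                resp.raise_for_status()
                return resp.json()
            except requests.RequestException as err:
                last_error = err
                time.sleep(2 ** attempt)  # exponential backoff before retrying
        # this upstream exhausted its retries; fail over to the next one
    raise RuntimeError(f"all upstream LLMs failed: {last_error}")
```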
There is substantial benefit in using the same software at the edge (observability, traffic shaping, authentication, rate limiting, etc.) as for outbound LLM inference. Sweep can therefore also act as a front/edge gateway for your agents, handling TLS termination, rate limiting, and prompt-level routing.
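As a rough illustration of prompt-level routing, the toy example below matches keywords to targets; Sweep's actual routing is driven by its purpose-built models, so treat the route table and keyword matching here as stand-in assumptions.

```python
# Toy sketch of prompt-level routing at the edge: inspect the incoming
# prompt and pick an upstream agent. Routes and keywords are hypothetical.
ROUTES = {  # hypothetical route table
    "swap": "http://dex-agent.internal/v1",
    "stake": "http://staking-agent.internal/v1",
}
DEFAULT_ROUTE = "http://general-agent.internal/v1"

def route_prompt(prompt: str) -> str:
    lowered = prompt.lower()
    for keyword, target in ROUTES.items():
        if keyword in lowered:
            return target
    return DEFAULT_ROUTE

assert route_prompt("Swap 2 SOL for USDC") == "http://dex-agent.internal/v1"
```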