This is an LLM comparison between:
Read 357 lines in 17 files
To create a route in Axum that streams a response, you can wrap a stream in a `hyper::Body` using `Body::wrap_stream` and return it from your handler (note: there is no `axum::response::Stream` type; in axum 0.6 the streaming body comes from hyper). This allows you to return a stream of data as the HTTP response. Here's a basic example of how to do this:
```rust
use axum::{
    response::IntoResponse,
    routing::get,
    Router,
};
use futures::stream::{self, StreamExt};
use http::StatusCode;
use hyper::Body;
use std::net::SocketAddr;

async fn stream_response() -> impl IntoResponse {
    // `&'static str` converts into `Bytes`, so the chunks can be passed
    // through as-is once wrapped in a `Result`.
    let stream = stream::iter(vec!["Hello", " ", "world!"])
        .map(Ok::<_, hyper::Error>);
    (StatusCode::OK, Body::wrap_stream(stream))
}

#[tokio::main]
async fn main() {
    // Build our application with a single route.
    let app = Router::new().route("/", get(stream_response));

    // Run our app with hyper.
    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    println!("listening on {}", addr);
    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();
}
```
In this example, we use `futures::stream::iter` to create a simple stream from a vector of `&str` values. Each chunk of the stream is wrapped in a `Result<_, hyper::Error>`, which is required by `Body::wrap_stream`. This stream is then used to create a `Body` that is returned as part of the response.

This is a basic example to demonstrate streaming responses. Depending on your use case, you might be streaming data from a file, a database, or another source. The key part is to convert your data into a stream and then use `Body::wrap_stream` to create a response body that Axum can send back to the client.
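For instance, streaming a file from disk could look like the following sketch (assuming axum 0.6 with hyper's `Body`, plus the `tokio-util` crate for `ReaderStream`; the path `/tmp/large.log` is only a placeholder):

```rust
use axum::response::IntoResponse;
use http::StatusCode;
use hyper::Body;
use tokio_util::io::ReaderStream;

// Hypothetical handler: stream a file to the client instead of
// buffering the whole thing in memory first.
async fn stream_file() -> impl IntoResponse {
    match tokio::fs::File::open("/tmp/large.log").await {
        Ok(file) => {
            // ReaderStream yields Result<Bytes, io::Error>, which is
            // exactly the item type Body::wrap_stream expects.
            let stream = ReaderStream::new(file);
            (StatusCode::OK, Body::wrap_stream(stream)).into_response()
        }
        Err(_) => StatusCode::NOT_FOUND.into_response(),
    }
}
```

Because `ReaderStream` reads the file in chunks, the response begins as soon as the first chunk is available rather than after the whole file is loaded.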
Read 357 lines in 17 files
To create a route that streams a response in Axum, you can use the `axum::body::StreamBody` type. Here's an example of how to create a route that streams a response:
```rust
use axum::{body::StreamBody, response::IntoResponse, routing::get, Router};
use futures::stream;
use std::{convert::Infallible, net::SocketAddr, time::Duration};
use tokio_stream::StreamExt as _; // provides `map` and `throttle`

async fn stream() -> impl IntoResponse {
    // An infinite stream of the same SSE-style chunk, emitted once per second.
    let stream = stream::repeat(&b"data: Hello, world!\n\n"[..])
        .map(Ok::<_, Infallible>)
        .throttle(Duration::from_secs(1));

    StreamBody::new(stream)
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/stream", get(stream));

    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();
}
```
In this example:

- We define an async function called `stream` that returns an `impl IntoResponse`.
- Inside the `stream` function, we create a stream using `stream::repeat` that repeatedly yields the bytes `b"data: Hello, world!\n\n"`. This creates an infinite stream of the same data.
- We use `map(Ok)` to wrap each item in `Ok`, converting it into a `Result`, because `StreamBody` expects a stream of `Result`s.
- We use `throttle` (from `tokio_stream::StreamExt`; the futures crate does not provide it) to limit the rate at which items are emitted from the stream. In this case, we throttle the stream to emit one item every second using `Duration::from_secs(1)`.
- We create a `StreamBody` using `StreamBody::new` and pass the stream to it. `StreamBody` implements `IntoResponse`, so it can be returned directly from the handler.
- Finally, we create a route using `get(stream)` and mount it on the `/stream` path using `Router::new().route("/stream", ...)`.
When a client makes a request to `/stream`, the `stream` handler will be invoked, and the response will be streamed back to the client. The client will receive the chunk `"data: Hello, world!\n\n"` every second, indefinitely.
You can customize the stream by generating your own data and adjusting the throttling duration as needed.
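As a sketch of generating your own data (the numbered-event format here is purely illustrative), you could swap the repeated chunk for an unfolding stream that carries state between items:

```rust
use futures::stream;
use std::{convert::Infallible, time::Duration};
use tokio_stream::StreamExt as _; // for `throttle`

// A hypothetical generator that yields "data: event 0\n\n",
// "data: event 1\n\n", ... — incrementing a counter as its state.
fn numbered_events() -> impl futures::Stream<Item = Result<Vec<u8>, Infallible>> {
    stream::unfold(0u64, |n| async move {
        let chunk = format!("data: event {n}\n\n").into_bytes();
        // Yield the chunk and advance the counter; returning None would end the stream.
        Some((Ok(chunk), n + 1))
    })
    .throttle(Duration::from_millis(500)) // emit twice per second
}
```

`Vec<u8>` converts into `Bytes`, so this stream can be handed to `StreamBody::new` exactly like the repeated chunk above.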
Note that streaming responses are useful for scenarios like server-sent events (SSE), where you want to continuously send data to the client over a long-lived connection.
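For SSE specifically, axum also ships a dedicated `axum::response::sse` module that handles the `data: ...\n\n` framing for you. A minimal sketch (assuming axum 0.6 and the tokio-stream crate for `throttle`) might look like:

```rust
use axum::response::sse::{Event, KeepAlive, Sse};
use futures::stream::{self, Stream};
use std::{convert::Infallible, time::Duration};
use tokio_stream::StreamExt as _;

// Sends "Hello, world!" as a well-formed SSE event once per second;
// Event::data produces the wire framing, so no manual "data:" prefix.
async fn sse_handler() -> Sse<impl Stream<Item = Result<Event, Infallible>>> {
    let stream = stream::repeat_with(|| Ok(Event::default().data("Hello, world!")))
        .throttle(Duration::from_secs(1));

    Sse::new(stream).keep_alive(KeepAlive::default())
}
```

The `keep_alive` call periodically sends comment frames so proxies don't drop the long-lived connection during quiet periods.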