Echo Server Tutorial
This tutorial builds a production-quality echo server backed by a preallocated worker pool. We’ll explore acceptors, socket management, and concurrent connection handling.
Code snippets assume:

#include <boost/corosio.hpp>
#include <boost/corosio/acceptor.hpp>
#include <boost/capy/task.hpp>
#include <boost/capy/ex/run_async.hpp>
#include <boost/capy/buffers.hpp>
#include <boost/capy/error.hpp>

#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <string>
#include <vector>

namespace corosio = boost::corosio;
namespace capy = boost::capy;
Overview
An echo server accepts TCP connections and sends back whatever data clients send. While simple, this pattern demonstrates core concepts:
- Listening for connections with acceptor
- Managing multiple concurrent connections
- Reading and writing data with sockets
- Handling the connection lifecycle
Architecture
Our server uses a worker pool pattern:
- Preallocate a fixed number of worker structures
- Each worker holds a socket and a buffer
- The accept loop assigns incoming connections to free workers
- Each worker runs an independent session coroutine
This avoids allocation during operation and limits resource usage.
Worker Structure
struct worker
{
    corosio::socket sock;
    std::string buf;
    bool in_use = false;

    explicit worker(corosio::io_context& ioc)
        : sock(ioc)
    {
        // Reserve once so per-read resizes never allocate
        buf.reserve(4096);
    }

    // Move operations let workers live in a std::vector
    worker(worker&&) = default;
    worker& operator=(worker&&) = default;
};
Each worker owns its socket and buffer. The in_use flag tracks availability, and the defaulted move operations let workers be stored in a std::vector. Because the buffer’s capacity is reserved in the constructor, the resize(4096) performed on every read never allocates.
Session Coroutine
The session coroutine handles one connection:
capy::task<void> run_session(worker& w)
{
    for (;;)
    {
        w.buf.clear();
        w.buf.resize(4096);

        // Read some data
        auto [ec, n] = co_await w.sock.read_some(
            capy::mutable_buffer(w.buf.data(), w.buf.size()));
        if (ec || n == 0)
            break;
        w.buf.resize(n);

        // Echo it back
        auto [wec, wn] = co_await corosio::write(
            w.sock, capy::const_buffer(w.buf.data(), w.buf.size()));
        if (wec)
            break;
    }
    w.sock.close();
    w.in_use = false; // the worker is available again
}
Notice:
- We reuse the worker’s buffer across reads
- read_some() returns as soon as any data arrives
- corosio::write() writes all of the data (it’s a composed operation)
- We mark the worker available again after the connection closes
Accept Loop
The accept loop assigns connections to free workers:
capy::task<void> accept_loop(
    corosio::io_context& ioc,
    corosio::acceptor& acc,
    std::vector<worker>& workers)
{
    for (;;)
    {
        // Find a free worker
        worker* free_worker = nullptr;
        for (auto& w : workers)
        {
            if (!w.in_use)
            {
                free_worker = &w;
                break;
            }
        }

        if (!free_worker)
        {
            // All workers busy
            std::cerr << "All workers busy, waiting...\n";
            corosio::socket temp(ioc);
            auto [ec] = co_await acc.accept(temp);
            if (ec)
                break;
            temp.close(); // Reject the connection
            continue;
        }

        // Accept into the free worker's socket
        auto [ec] = co_await acc.accept(free_worker->sock);
        if (ec)
        {
            std::cerr << "Accept error: " << ec.message() << "\n";
            break;
        }

        // Mark the worker busy before spawning, so the next
        // scan cannot hand it out again
        free_worker->in_use = true;

        // Spawn the session coroutine
        capy::run_async(ioc.get_executor())(run_session(*free_worker));
    }
}
When all workers are busy, we accept and immediately close the connection. A production server might queue connections or implement backpressure.
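One small refinement, sketched below rather than part of the tutorial code, is to replace the linear scan with an explicit free list: the accept loop claims a worker in O(1), and finished sessions return themselves to the pool. The free_list container is an illustrative addition, not library API, and the fragments assume the surrounding tutorial code:

// At startup, after constructing the workers:
std::vector<worker*> free_list;
for (auto& w : workers)
    free_list.push_back(&w);

// In the accept loop, claim a worker in O(1):
worker* free_worker = nullptr;
if (!free_list.empty())
{
    free_worker = free_list.back();
    free_list.pop_back();
}

// At the end of run_session, return the worker to the pool:
free_list.push_back(&w);

With this scheme the in_use flag becomes unnecessary; membership in the free list is the accounting.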
Main Function
int main(int argc, char* argv[])
{
    if (argc != 3)
    {
        std::cerr << "Usage: echo_server <port> <max-workers>\n";
        return 1;
    }
    auto port = static_cast<std::uint16_t>(std::atoi(argv[1]));
    int max_workers = std::atoi(argv[2]);
    if (max_workers <= 0)
    {
        std::cerr << "max-workers must be a positive number\n";
        return 1;
    }

    corosio::io_context ioc;

    // Preallocate workers
    std::vector<worker> workers;
    workers.reserve(max_workers);
    for (int i = 0; i < max_workers; ++i)
        workers.emplace_back(ioc);

    // Create acceptor and listen
    corosio::acceptor acc(ioc);
    acc.listen(corosio::endpoint(port));

    std::cout << "Echo server listening on port " << port
              << " with " << max_workers << " workers\n";

    capy::run_async(ioc.get_executor())(accept_loop(ioc, acc, workers));
    ioc.run();
}
Key Design Decisions
Why Worker Pooling?
- Bounded memory: a fixed number of concurrent connections
- No allocation: sockets and buffers are preallocated
- Simple accounting: a boolean flag tracks usage
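For contrast, here is a minimal sketch of the unpooled alternative, where each connection heap-allocates its own state and the coroutine frame owns it. run_owned_session is a hypothetical name, and the sketch assumes, as the tutorial already does, that run_async keeps the detached task alive until it completes:

#include <memory>

// Illustrative only: per-connection allocation instead of a pool.
// The coroutine frame owns the worker and frees it when the session ends.
capy::task<void> run_owned_session(std::unique_ptr<worker> w)
{
    for (;;)
    {
        w->buf.resize(4096);
        auto [ec, n] = co_await w->sock.read_some(
            capy::mutable_buffer(w->buf.data(), w->buf.size()));
        if (ec || n == 0)
            break;
        auto [wec, wn] = co_await corosio::write(
            w->sock, capy::const_buffer(w->buf.data(), n));
        if (wec)
            break;
    }
    w->sock.close();
    // w is destroyed with the coroutine frame, releasing the state
}

// In the accept loop:
//     auto w = std::make_unique<worker>(ioc);
//     auto [ec] = co_await acc.accept(w->sock);
//     if (!ec)
//         capy::run_async(ioc.get_executor())(run_owned_session(std::move(w)));

Pooling gives up this flexibility in exchange for bounded memory and zero steady-state allocation.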
Why Composed Write?
The corosio::write() free function ensures all data is sent:
// write_some: may write partial data
auto [ec, n] = co_await sock.write_some(buf); // n might be < buf.size()
// write: writes all data or fails
auto [ec, n] = co_await corosio::write(sock, buf); // n == buf.size() or error
For echo servers, we want complete message delivery.
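To make the distinction concrete, here is roughly the loop a composed write performs, written against write_some(). This is an illustrative sketch using only the calls shown above, not the library's actual implementation, and write_all is a hypothetical name:

// Illustrative: keep calling write_some() until every byte is sent
capy::task<void> write_all(
    corosio::socket& sock, char const* data, std::size_t size)
{
    std::size_t total = 0;
    while (total < size)
    {
        auto [ec, n] = co_await sock.write_some(
            capy::const_buffer(data + total, size - total));
        if (ec)
            co_return; // a real version would report ec to the caller
        total += n;
    }
}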
Why Not Use Exceptions?
The session loop needs to treat end-of-stream (EOF) as a normal outcome. With structured bindings, EOF is just another result:
auto [ec, n] = co_await sock.read_some(buf);
if (ec || n == 0)
break; // Normal termination path
With exceptions, EOF would require a try-catch:
try
{
    auto n = (co_await sock.read_some(buf)).value();
}
catch (...)
{
    // EOF is an exception here
}
Testing
Start the server:
$ ./echo_server 8080 10
Echo server listening on port 8080 with 10 workers
Connect with netcat:
$ nc localhost 8080
Hello
Hello
World
World

With max-workers set to 10, an eleventh simultaneous connection is accepted and then immediately closed by the server.
Next Steps
- HTTP Client — Build an HTTP client
- Sockets Guide — A deep dive into socket operations
- Composed Operations — Understanding read and write