IO Service
The tacopie IO Service is the service in charge of IO handling.
It polls sockets for input and output, processes read and write operations, and calls the appropriate callbacks.
The IO Service is defined as tacopie::network::io_service.
For most use cases, you do not have to worry about the IO Service: a default global instance is provided and used by all your tcp_client and tcp_server instances.
tacopie::network::io_service defines the following static function:
const std::shared_ptr<io_service>& tacopie::network::get_default_io_service(std::uint32_t num_io_workers = 1);
Therefore, you can access the global io_service instance from anywhere by calling:
tacopie::network::io_service::get_default_io_service();
When calling this function for the first time, the io service is created. Subsequent calls simply return the instance.
The num_io_workers parameter defines the number of workers used by the io_service. Workers are described later on this page.
On the first call, num_io_workers determines the exact number of workers spawned. On subsequent calls, if this number changes, the number of io workers is adjusted accordingly (it must always remain strictly positive).
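For illustration, here is a minimal sketch of retrieving the default io_service and adjusting its worker count. It follows the qualified names used on this page; the <tacopie/tacopie> include and the exact qualification of get_default_io_service (static member vs. namespace-level function) may differ depending on your tacopie version.
```cpp
#include <tacopie/tacopie>

int main(void) {
  // first call: the default io_service is created with 4 io workers
  auto& service = tacopie::network::io_service::get_default_io_service(4);

  // subsequent calls return the same instance; passing a different
  // (strictly positive) value adjusts the number of io workers
  tacopie::network::io_service::get_default_io_service(2);

  (void) service;
  return 0;
}
```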
tacopie::network::io_service also defines the following static function:
void tacopie::network::set_default_io_service(const std::shared_ptr<io_service>& service);
Therefore, you can set the global io_service instance from anywhere by calling:
tacopie::network::io_service::set_default_io_service(some_instance);
This function accepts nullptr.
This can be useful if you want to make sure that the default io_service instance is destroyed.
A typical use case is forking in the middle of your program: forking requires cleaning up worker threads to avoid issues in the child process. Setting the global instance to nullptr and destroying all objects using the io_service ensures that all background threads are joined.
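For example, here is a minimal sketch of that fork scenario. It assumes, as described above, that tcp_client picks up the default global io_service, and it uses the qualified names shown on this page; the connection and read/write calls are omitted for brevity.
```cpp
#include <tacopie/tacopie>

#include <unistd.h>

int main(void) {
  {
    // the client implicitly uses the default global io_service
    // (connection and read/write calls omitted for brevity)
    tacopie::tcp_client client;
  }
  // the client is now destroyed; drop the global instance as well so that
  // the io_service and its background threads are joined before forking
  tacopie::network::io_service::set_default_io_service(nullptr);

  pid_t pid = fork();
  if (pid == 0) {
    // child process: no stale io threads were inherited; a fresh default
    // io_service will be created on the next get_default_io_service call
  }

  return 0;
}
```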
However, even though there is a default global instance, the io_service is not a singleton.
You can create as many io_service instances as you want and assign a specific io_service to your tcp_client and tcp_server instances.
This can be useful if you want a dedicated io_service for some clients, or a different io_service configuration for different clients.
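Since this page does not show the call for attaching an io_service directly to a single client, the sketch below takes a conservative route: it temporarily installs a dedicated io_service as the default while the client is constructed, then restores the previous default. The io_service constructor argument (a worker count mirroring num_io_workers) and the fact that clients capture the default io_service at construction time are assumptions here.
```cpp
#include <tacopie/tacopie>

#include <memory>

int main(void) {
  // dedicated io_service; the constructor parameter (number of io workers)
  // is an assumption mirroring the num_io_workers parameter described above
  auto dedicated_service = std::make_shared<tacopie::network::io_service>(2);

  // swap it in as the default so that clients built now pick it up,
  // then restore the previous default for the rest of the program
  auto previous_default = tacopie::network::io_service::get_default_io_service();
  tacopie::network::io_service::set_default_io_service(dedicated_service);

  tacopie::tcp_client dedicated_client; // uses dedicated_service

  tacopie::network::io_service::set_default_io_service(previous_default);

  tacopie::tcp_client regular_client; // uses the restored default io_service

  return 0;
}
```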
The io_service is designed as follows:
- 1 thread is in charge of the event loop
- num_io_workers threads are in charge of processing read and write callbacks
Basically, when the io_service is started, the event loop thread is spawned and starts polling the underlying socket file descriptors.
Similarly, num_io_workers threads are spawned inside a thread pool and wait to receive tasks to process.
When the event loop thread detects a new read (or write) event, it pushes a new task to be completed by the io workers. The io workers then process this task and execute the callbacks attached to it.
This means the following things:
- the io_service requires at least 2 threads to work: 1 event loop thread and at least 1 io worker.
- the number of io workers is flexible: it can be shrunk or increased at runtime.
- you should not execute blocking or very long tasks in the read and write callbacks, especially if you have very few io workers. Keep the callbacks short, or offload long tasks to your own background thread, as sketched below.
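To illustrate this last point, here is a minimal sketch of offloading long-running work from a read callback. handle_read is a hypothetical callback standing in for whatever you register on your tcp_client, and the payload type is an assumption; the exact read API is not covered on this page.
```cpp
#include <thread>
#include <utility>
#include <vector>

// long-running processing, executed outside of the io workers
void process_payload(std::vector<char> payload) {
  // heavy parsing, disk or database access, ...
  (void) payload;
}

// hypothetical read callback: stands in for whatever callback you register
// on your tcp_client; the payload type is an assumption
void handle_read(std::vector<char> data) {
  // this code runs on one of the shared io workers, so keep it short:
  // hand the payload off to a background thread and return immediately
  std::thread(process_payload, std::move(data)).detach();
}
```
A real application would usually hand the payload to its own worker pool rather than detaching a thread per message; the detach only keeps the sketch short.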
Need more information? Contact me.