Diffstat (limited to 'libs/libcurl/src/README.multi_socket')
 libs/libcurl/src/README.multi_socket | 53 +++++++++++++
 1 file changed, 53 insertions(+), 0 deletions(-)
diff --git a/libs/libcurl/src/README.multi_socket b/libs/libcurl/src/README.multi_socket
new file mode 100644
index 0000000000..d91e1d9f27
--- /dev/null
+++ b/libs/libcurl/src/README.multi_socket
@@ -0,0 +1,53 @@
+Implementation of the curl_multi_socket API
+
+ The main ideas of the new API are simply:
+
+ 1 - The application can use whatever event system it likes, as libcurl
+ tells it which file descriptors libcurl waits on, and for what action.
+ (The previous API returned fd_sets, which is very select()-centric.)
+
+ 2 - When the application discovers action on a single socket, it tells
+ libcurl that there was action on this particular socket, and libcurl can
+ then act on that socket/transfer only and not care about any other
+ transfers. (The previous API always had to scan through all the existing
+ transfers.)
+
+ The idea is that curl_multi_socket_action() calls a given callback with
+ information about which socket to wait on and for what action, and that
+ callback only gets called when the status of that socket has changed.
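+
+ As a minimal sketch of what this looks like from the application's side
+ (the struct, the helper names and the event-system hooks below are
+ illustrative assumptions, not part of libcurl), the application registers
+ a socket callback with CURLMOPT_SOCKETFUNCTION and feeds events back in
+ with curl_multi_socket_action():
+
+   #include <curl/curl.h>
+
+   /* hypothetical application state; a real program would also keep its
+      event library's handles in here */
+   struct app_ctx {
+     CURLM *multi;
+     int still_running;
+   };
+
+   /* libcurl calls this to say which socket to watch and for what:
+      'what' is CURL_POLL_IN, CURL_POLL_OUT, CURL_POLL_INOUT or
+      CURL_POLL_REMOVE */
+   static int socket_cb(CURL *easy, curl_socket_t s, int what,
+                        void *userp, void *socketp)
+   {
+     struct app_ctx *ctx = (struct app_ctx *)userp;
+     (void)easy; (void)socketp; (void)ctx; (void)s; (void)what;
+     /* add, modify or remove 's' in the application's event system here,
+        depending on 'what' */
+     return 0;
+   }
+
+   /* called from the application's event loop when its event system says
+      socket 's' is readable and/or writable */
+   static void on_socket_event(struct app_ctx *ctx, curl_socket_t s,
+                               int readable, int writable)
+   {
+     int bitmask = (readable ? CURL_CSELECT_IN : 0) |
+                   (writable ? CURL_CSELECT_OUT : 0);
+     curl_multi_socket_action(ctx->multi, s, bitmask, &ctx->still_running);
+   }
+
+   /* done once, right after curl_multi_init() */
+   static void setup_socket_cb(struct app_ctx *ctx)
+   {
+     curl_multi_setopt(ctx->multi, CURLMOPT_SOCKETFUNCTION, socket_cb);
+     curl_multi_setopt(ctx->multi, CURLMOPT_SOCKETDATA, ctx);
+   }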
+
+ We also added a timer callback that makes libcurl call the application
+ when the timeout value changes, and you set that with curl_multi_setopt()
+ and the CURLMOPT_TIMERFUNCTION option. To make this work, we internally
+ added a struct to each easy handle in which we store an "expire time" (if
+ any). The structs are then "splay sorted" so that we can add and remove
+ times from the linked list and yet somewhat swiftly figure out both how
+ long it is until the next nearest timer expires and which timer (handle)
+ we should take care of now. Of course, the upside of all this is that we
+ get a curl_multi_timeout() that should also work with old-style
+ applications that use curl_multi_perform().
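+
+ A matching sketch for the timer side (app_restart_timer() and
+ app_stop_timer() are assumed application helpers wrapping whatever timer
+ facility the event system offers; they are not libcurl functions):
+
+   #include <curl/curl.h>
+
+   /* assumed application helpers, declared here only */
+   void app_restart_timer(long timeout_ms);
+   void app_stop_timer(void);
+
+   /* libcurl calls this whenever its nearest "expire time" changes;
+      a timeout_ms of -1 means the timer should be deleted */
+   static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
+   {
+     (void)multi; (void)userp;
+     if(timeout_ms < 0)
+       app_stop_timer();
+     else
+       app_restart_timer(timeout_ms);
+     return 0;
+   }
+
+   /* when the application's timer then fires, it calls
+        curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running);
+      while an old-style curl_multi_perform() application can instead ask
+      for the same information with
+        long ms;
+        curl_multi_timeout(multi, &ms); */
+
+   static void setup_timer_cb(CURLM *multi)
+   {
+     curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);
+     curl_multi_setopt(multi, CURLMOPT_TIMERDATA, NULL);
+   }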
+
+ We created an internal "socket to easy handles" hash table that, given a
+ socket (file descriptor), returns the easy handle that waits for action on
+ that socket. This hash is made using the already existing hash code
+ (previously only used for the DNS cache).
+
+ To make libcurl able to report plain sockets in the socket callback, we
+ had to re-organize the internals of curl_multi_fdset() etc so that the
+ conversion from sockets to fd_sets for that function is only done in the
+ last step before the data is returned. We also had to extend c-ares to get
+ a function that can return plain sockets, as that library too returned
+ only fd_sets and that is no longer good enough. The changes done to c-ares
+ are available in c-ares 1.3.1 and later.
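+
+ For reference, a sketch of the c-ares side, assuming the ares_getsock()
+ interface that c-ares 1.3.1 provides (the surrounding function is purely
+ illustrative):
+
+   #include <ares.h>
+
+   /* collect the plain sockets c-ares currently cares about; the caller
+      would then wait on them in the same event system as libcurl's own
+      sockets */
+   static void check_ares_sockets(ares_channel channel)
+   {
+     ares_socket_t socks[ARES_GETSOCK_MAXNUM];
+     int bits = ares_getsock(channel, socks, ARES_GETSOCK_MAXNUM);
+     int i;
+     for(i = 0; i < ARES_GETSOCK_MAXNUM; i++) {
+       if(ARES_GETSOCK_READABLE(bits, i))
+         ; /* wait for readability on socks[i] */
+       if(ARES_GETSOCK_WRITABLE(bits, i))
+         ; /* wait for writability on socks[i] */
+     }
+   }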
+
+ We have done test runs with up to 9000 connections (with a single active
+ one). The curl_multi_socket_action() invocation then takes less than 10
+ microseconds on average (using the read-only-1-byte-at-a-time hack). We
+ are now below the 60 microseconds "per socket action" goal (the extra 50
+ is the time libevent needs).
+
+Documentation
+
+ http://curl.haxx.se/libcurl/c/curl_multi_socket_action.html
+ http://curl.haxx.se/libcurl/c/curl_multi_timeout.html
+ http://curl.haxx.se/libcurl/c/curl_multi_setopt.html