Diffstat (limited to 'plugins/FTPFileYM/curl/docs/TODO')
-rw-r--r--  plugins/FTPFileYM/curl/docs/TODO  700
1 file changed, 0 insertions, 700 deletions
diff --git a/plugins/FTPFileYM/curl/docs/TODO b/plugins/FTPFileYM/curl/docs/TODO
deleted file mode 100644
index 8b133dc160..0000000000
--- a/plugins/FTPFileYM/curl/docs/TODO
+++ /dev/null
@@ -1,700 +0,0 @@
- _ _ ____ _
- ___| | | | _ \| |
- / __| | | | |_) | |
- | (__| |_| | _ <| |___
- \___|\___/|_| \_\_____|
-
- Things that could be nice to do in the future
-
- Things to do in project cURL. Please tell us what you think, contribute and
- send us patches that improve things!
-
- All bugs documented in the KNOWN_BUGS document are subject for fixing!
-
- 1. libcurl
- 1.2 More data sharing
- 1.3 struct lifreq
- 1.4 signal-based resolver timeouts
- 1.5 get rid of PATH_MAX
- 1.6 Happy Eyeball dual stack connect
- 1.7 Modified buffer size approach
-
- 2. libcurl - multi interface
- 2.1 More non-blocking
- 2.2 Fix HTTP Pipelining for PUT
-
- 3. Documentation
- 3.1 More and better
-
- 4. FTP
- 4.1 HOST
- 4.2 Alter passive/active on failure and retry
- 4.3 Earlier bad letter detection
- 4.4 REST for large files
- 4.5 FTP proxy support
- 4.6 ASCII support
-
- 5. HTTP
- 5.1 Better persistency for HTTP 1.0
- 5.2 support FF3 sqlite cookie files
- 5.3 Rearrange request header order
- 5.4 HTTP2/SPDY
-
- 6. TELNET
- 6.1 ditch stdin
- 6.2 ditch telnet-specific select
- 6.3 feature negotiation debug data
- 6.4 send data in chunks
-
- 7. SMTP
- 7.1 Pipelining
- 7.2 Graceful base64 decoding failure
- 7.3 Enhanced capability support
-
- 8. POP3
- 8.1 Pipelining
- 8.2 Graceful base64 decoding failure
- 8.3 Enhanced capability support
-
- 9. IMAP
- 9.1 Graceful base64 decoding failure
- 9.2 Enhanced capability support
-
- 10. LDAP
- 10.1 SASL based authentication mechanisms
-
- 11. New protocols
- 11.1 RSYNC
-
- 12. SSL
- 12.1 Disable specific versions
- 12.2 Provide mutex locking API
- 12.3 Evaluate SSL patches
- 12.4 Cache OpenSSL contexts
- 12.5 Export session ids
- 12.6 Provide callback for cert verification
- 12.7 Support other SSL libraries
- 12.8 improve configure --with-ssl
- 12.9 Support DANE
-
- 13. GnuTLS
- 13.1 SSL engine stuff
- 13.2 check connection
-
- 14. SASL
- 14.1 Other authentication mechanisms
-
- 15. Client
- 15.1 sync
- 15.2 glob posts
- 15.3 prevent file overwriting
- 15.4 simultaneous parallel transfers
- 15.5 provide formpost headers
- 15.6 url-specific options
- 15.7 warning when setting an option
- 15.8 IPv6 addresses with globbing
-
- 16. Build
- 16.1 roffit
-
- 17. Test suite
- 17.1 SSL tunnel
- 17.2 nicer lacking perl message
- 17.3 more protocols supported
- 17.4 more platforms supported
-
- 18. Next SONAME bump
- 18.1 http-style HEAD output for ftp
- 18.2 combine error codes
- 18.3 extend CURLOPT_SOCKOPTFUNCTION prototype
-
- 19. Next major release
- 19.1 cleanup return codes
- 19.2 remove obsolete defines
- 19.3 size_t
- 19.4 remove several functions
- 19.5 remove CURLOPT_FAILONERROR
- 19.6 remove CURLOPT_DNS_USE_GLOBAL_CACHE
- 19.7 remove progress meter from libcurl
- 19.8 remove 'curl_httppost' from public
- 19.9 have form functions use CURL handle argument
- 19.10 Add CURLOPT_MAIL_CLIENT option
-
-==============================================================================
-
-1. libcurl
-
-1.2 More data sharing
-
- curl_share_* functions already exist and work, and they can be extended to
- share more. For example, enable sharing of the ares channel and the
- connection cache.
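-
- A minimal sketch of how an application uses the existing share API today,
- with a made-up CURL_LOCK_DATA_CONNECT symbol standing in for the proposed
- connection cache sharing (that value does not exist in curl/curl.h yet):
-
-   #include <curl/curl.h>
-   #include <pthread.h>
-
-   static pthread_mutex_t share_lock = PTHREAD_MUTEX_INITIALIZER;
-
-   static void lock_cb(CURL *handle, curl_lock_data data,
-                       curl_lock_access access, void *userptr)
-   {
-     (void)handle; (void)data; (void)access; (void)userptr;
-     pthread_mutex_lock(&share_lock);
-   }
-
-   static void unlock_cb(CURL *handle, curl_lock_data data, void *userptr)
-   {
-     (void)handle; (void)data; (void)userptr;
-     pthread_mutex_unlock(&share_lock);
-   }
-
-   static CURLSH *setup_share(CURL *easy)
-   {
-     CURLSH *sh = curl_share_init();
-     curl_share_setopt(sh, CURLSHOPT_LOCKFUNC, lock_cb);
-     curl_share_setopt(sh, CURLSHOPT_UNLOCKFUNC, unlock_cb);
-     curl_share_setopt(sh, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);
-     /* the proposed extension could then look like:
-        curl_share_setopt(sh, CURLSHOPT_SHARE, CURL_LOCK_DATA_CONNECT); */
-     curl_easy_setopt(easy, CURLOPT_SHARE, sh);
-     return sh;
-   }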
-
-1.3 struct lifreq
-
- Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
- SIOCGIFADDR on newer Solaris versions, as they claim the latter is obsolete,
- in order to properly support IPv6 interface addresses for network
- interfaces.
-
-1.4 signal-based resolver timeouts
-
- libcurl built without an asynchronous resolver library uses alarm() to time
- out DNS lookups. When a timeout occurs, this causes libcurl to jump from the
- signal handler back into the library with a sigsetjmp, which effectively
- causes libcurl to continue running within the signal handler. This is
- non-portable and could cause problems on some platforms. A discussion on the
- problem is available at http://curl.haxx.se/mail/lib-2008-09/0197.html
-
- Also, alarm() provides timeout resolution only to the nearest second.
- alarm() ought to be replaced by setitimer() on systems that support it.
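-
- A rough sketch (not curl code; arm_resolve_timer() is a made-up helper) of
- what a setitimer()-based, millisecond-resolution timeout could look like:
-
-   #include <sys/time.h>
-   #include <signal.h>
-   #include <string.h>
-
-   static void on_timeout(int sig)
-   {
-     (void)sig;   /* would siglongjmp() back, like the alarm() code today */
-   }
-
-   /* arm a one-shot timer with millisecond resolution instead of alarm() */
-   static void arm_resolve_timer(long timeout_ms)
-   {
-     struct itimerval it;
-     memset(&it, 0, sizeof(it));          /* it_interval zero => one-shot */
-     it.it_value.tv_sec = timeout_ms / 1000;
-     it.it_value.tv_usec = (timeout_ms % 1000) * 1000;
-     signal(SIGALRM, on_timeout);
-     setitimer(ITIMER_REAL, &it, NULL);
-   }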
-
-1.5 get rid of PATH_MAX
-
- Having code use and rely on PATH_MAX is not nice:
- http://insanecoding.blogspot.com/2007/11/pathmax-simply-isnt.html
-
- Currently the SSH based code uses it a bit, but to remove PATH_MAX from there
- we need libssh2 to properly tell us when we pass in a too small buffer and
- its current API (as of libssh2 1.2.7) doesn't.
-
-1.6 Happy Eyeball dual stack connect
-
- When a transition introduces an alternative technology, such as IPv6 as an
- alternative to IPv4, and more than one option exists simultaneously, there
- are reasons to reconsider the internal connect choices so that neither
- option suffers.
-
- To make libcurl do blazing fast IPv6 in a dual-stack configuration, this needs
- to be addressed:
-
- http://tools.ietf.org/html/rfc6555
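-
- A rough standalone sketch of the RFC 6555 approach (not curl code; all names
- below are made up): give the IPv6 attempt a short head start, then race an
- IPv4 attempt against it and keep whichever socket finishes its handshake
- first.
-
-   #include <sys/socket.h>
-   #include <sys/select.h>
-   #include <netdb.h>
-   #include <fcntl.h>
-   #include <unistd.h>
-   #include <string.h>
-
-   static int nb_connect(struct addrinfo *ai) /* start non-blocking connect */
-   {
-     int s = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
-     if(s >= 0) {
-       fcntl(s, F_SETFL, O_NONBLOCK);
-       connect(s, ai->ai_addr, ai->ai_addrlen); /* expected: -1/EINPROGRESS */
-     }
-     return s;
-   }
-
-   static int handshake_ok(int s)        /* did the connect succeed? */
-   {
-     int err = 0;
-     socklen_t len = sizeof(err);
-     getsockopt(s, SOL_SOCKET, SO_ERROR, &err, &len);
-     return err == 0;
-   }
-
-   static int happy_connect(const char *host, const char *port)
-   {
-     struct addrinfo hints, *a6 = NULL, *a4 = NULL;
-     int s6 = -1, s4 = -1, winner = -1;
-     struct timeval head_start = { 0, 300 * 1000 }; /* ~300 ms, RFC 6555 */
-     fd_set wfds;
-
-     memset(&hints, 0, sizeof(hints));
-     hints.ai_socktype = SOCK_STREAM;
-     hints.ai_family = AF_INET6;
-     getaddrinfo(host, port, &hints, &a6);
-     hints.ai_family = AF_INET;
-     getaddrinfo(host, port, &hints, &a4);
-
-     if(a6 && (s6 = nb_connect(a6)) >= 0) {
-       FD_ZERO(&wfds);
-       FD_SET(s6, &wfds);
-       if(select(s6 + 1, NULL, &wfds, NULL, &head_start) > 0) {
-         if(handshake_ok(s6))
-           winner = s6;                  /* IPv6 won within the head start */
-         else {
-           close(s6);                    /* IPv6 failed fast */
-           s6 = -1;
-         }
-       }
-     }
-     if(winner < 0 && a4)
-       s4 = nb_connect(a4);              /* now race IPv4 against IPv6 */
-
-     while(winner < 0 && (s6 >= 0 || s4 >= 0)) {
-       int maxfd = (s6 > s4) ? s6 : s4;
-       FD_ZERO(&wfds);
-       if(s6 >= 0) FD_SET(s6, &wfds);
-       if(s4 >= 0) FD_SET(s4, &wfds);
-       if(select(maxfd + 1, NULL, &wfds, NULL, NULL) <= 0)
-         break;
-       if(s6 >= 0 && FD_ISSET(s6, &wfds)) {
-         if(handshake_ok(s6)) { winner = s6; break; }
-         close(s6); s6 = -1;
-       }
-       if(s4 >= 0 && FD_ISSET(s4, &wfds)) {
-         if(handshake_ok(s4)) { winner = s4; break; }
-         close(s4); s4 = -1;
-       }
-     }
-     if(s6 >= 0 && s6 != winner) close(s6);
-     if(s4 >= 0 && s4 != winner) close(s4);
-     if(a6) freeaddrinfo(a6);
-     if(a4) freeaddrinfo(a4);
-     return winner;                      /* -1 if both families failed */
-   }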
-
-1.7 Modified buffer size approach
-
- Current libcurl allocates a fixed 16K size buffer for download and an
- additional 16K for upload. They are always unconditionally part of the easy
- handle. If CRLF translations are requested, an additional 32K "scratch
- buffer" is allocated. A total of 64K transfer buffers in the worst case.
-
- First, these buffers could be freed while the handles are not actually in
- use, so that lingering handles that are just kept around in queues waste
- less memory.
-
- Secondly, SFTP is a protocol that needs to handle many ~30K blocks at once
- since each needs to be individually acked and therefore libssh2 must be
- allowed to send (or receive) many separate ones in parallel to achieve high
- transfer speeds. A current libcurl build with a 16K buffer makes that
- impossible, but one with a 512K buffer will reach MUCH faster transfers. But
- allocating 512K unconditionally for all buffers just in case they would like
- to do fast SFTP transfers at some point is not a good solution either.
-
- Dynamically allocate buffer size depending on protocol in use in combination
- with freeing it after each individual transfer? Other suggestions?
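-
- A tiny illustration of the per-protocol idea (the helper below is made up
- and not how libcurl is structured):
-
-   #include <string.h>
-
-   /* made-up helper: pick a transfer buffer size per protocol, to be
-      allocated lazily when a transfer starts and freed when it is done */
-   static size_t pick_buffer_size(const char *scheme)
-   {
-     if(strcmp(scheme, "sftp") == 0 || strcmp(scheme, "scp") == 0)
-       return 512 * 1024;  /* lets libssh2 keep many ~30K packets in flight */
-     return 16 * 1024;     /* today's default */
-   }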
-
-
-2. libcurl - multi interface
-
-2.1 More non-blocking
-
- Make sure we don't ever loop because of non-blocking sockets returning
- EWOULDBLOCK or similar. Blocking cases include:
-
- - Name resolves on non-windows unless c-ares is used
- - NSS SSL connections
- - HTTP proxy CONNECT operations
- - SOCKS proxy handshakes
- - file:// transfers
- - TELNET transfers
- - The "DONE" operation (post transfer protocol-specific actions) for the
- protocols SFTP, SMTP, FTP. Fixing Curl_done() for this is a worthy task.
-
-2.2 Fix HTTP Pipelining for PUT
-
- HTTP pipelining can greatly enhance performance for multiple serial
- requests. libcurl currently only supports it for HEAD and GET requests, but
- it should also be possible for PUT.
-
-3. Documentation
-
-3.1 More and better
-
- Exactly
-
-4. FTP
-
-4.1 HOST
-
- HOST is a proposed command, currently in the works, that lets a client tell
- the server which host name to use, so FTP servers can offer name-based
- virtual hosting:
-
- http://tools.ietf.org/html/draft-hethmon-mcmurray-ftp-hosts-11
-
-4.2 Alter passive/active on failure and retry
-
- When trying to connect passively to a server which only supports active
- connections, libcurl returns CURLE_FTP_WEIRD_PASV_REPLY and closes the
- connection. There could be a way to fall back to an active connection (and
- vice versa). http://curl.haxx.se/bug/feature.cgi?id=1754793
-
-4.3 Earlier bad letter detection
-
- Make the detection of (bad) %0d and %0a codes in FTP url parts earlier in the
- process to avoid doing a resolve and connect in vain.
-
-4.4 REST for large files
-
- REST fix for servers not behaving well on >2GB requests. This should fail if
- the server doesn't set the pointer to the requested index. The tricky
- (impossible?) part is to figure out if the server did the right thing or not.
-
-4.5 FTP proxy support
-
- Support the most common FTP proxies; Philip Newton provided a list
- allegedly taken from ncftp. This is not a subject without debate, and it is
- probably not really suitable for libcurl.
- http://curl.haxx.se/mail/archive-2003-04/0126.html
-
-4.6 ASCII support
-
- FTP ASCII transfers do not follow RFC959. They don't convert the data
- accordingly.
-
-5. HTTP
-
-5.1 Better persistency for HTTP 1.0
-
- "Better" support for persistent connections over HTTP 1.0
- http://curl.haxx.se/bug/feature.cgi?id=1089001
-
-5.2 support FF3 sqlite cookie files
-
- Firefox 3 changed from its former cookie file format to an sqlite database.
- We should consider how (lib)curl can/should support this.
- http://curl.haxx.se/bug/feature.cgi?id=1871388
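-
- A minimal standalone sketch of reading such a database with the sqlite3
- API. The moz_cookies table and column names used below are assumptions
- about the Firefox 3 schema and would need to be verified:
-
-   #include <sqlite3.h>
-   #include <stdio.h>
-
-   static int dump_ff3_cookies(const char *dbfile)
-   {
-     sqlite3 *db;
-     sqlite3_stmt *stmt;
-     if(sqlite3_open(dbfile, &db) != SQLITE_OK)
-       return 1;
-     /* column names are assumed, not verified against the real schema */
-     if(sqlite3_prepare_v2(db,
-          "SELECT host, path, isSecure, expiry, name, value"
-          " FROM moz_cookies", -1, &stmt, NULL) == SQLITE_OK) {
-       while(sqlite3_step(stmt) == SQLITE_ROW)
-         printf("%s\t%s\t%s\t%lld\t%s\t%s\n",
-                (const char *)sqlite3_column_text(stmt, 0),
-                (const char *)sqlite3_column_text(stmt, 1),
-                sqlite3_column_int(stmt, 2) ? "TRUE" : "FALSE",
-                (long long)sqlite3_column_int64(stmt, 3),
-                (const char *)sqlite3_column_text(stmt, 4),
-                (const char *)sqlite3_column_text(stmt, 5));
-       sqlite3_finalize(stmt);
-     }
-     sqlite3_close(db);
-     return 0;
-   }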
-
-5.3 Rearrange request header order
-
- Server implementors often make an effort to detect browsers and to reject
- clients they can detect not to match. One of the last details we cannot yet
- control in libcurl's HTTP requests, and which can also be exploited to
- detect that libcurl is in fact used even when it tries to impersonate a
- browser, is the order of the request headers. I propose that we introduce a
- new option in which you give headers a value, and when the HTTP request is
- built the headers are sorted by that number. Internally created headers
- could then use a default value so that only headers that need to be moved
- have to be specified.
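-
- A tiny illustration of what the sorting could look like; the struct and
- weight field below are made up for the sake of the example:
-
-   #include <stdlib.h>
-
-   struct weighted_header {
-     const char *line;   /* e.g. "User-Agent: foo/1.0" */
-     int weight;         /* internally created headers get a default weight */
-   };
-
-   static int by_weight(const void *a, const void *b)
-   {
-     return ((const struct weighted_header *)a)->weight -
-            ((const struct weighted_header *)b)->weight;
-   }
-
-   /* before building the request:
-      qsort(headers, nheaders, sizeof(struct weighted_header), by_weight); */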
-
-5.4 HTTP2/SPDY
-
- The first drafts for HTTP2 have been published
- (http://tools.ietf.org/html/draft-ietf-httpbis-http2-03) and are so far
- based on SPDY (http://www.chromium.org/spdy) designs and experiences.
- Chances are the final version will end up in that style. Chrome and Firefox
- already support SPDY, and lots of web services do too.
-
- It would make sense to implement SPDY support now and later transition into
- or add HTTP2 support as well.
-
- We should base our HTTP2/SPDY work on a 3rd party library for the protocol
- fiddling. The Spindly library (http://spindly.haxx.se/) was an attempt to
- make such a library with an API suitable for use by libcurl, but that effort
- has more or less stalled. spdylay (https://github.com/tatsuhiro-t/spdylay)
- may be a better option, either used directly or wrapped with a more
- spindly-like API.
-
-
-6. TELNET
-
-6.1 ditch stdin
-
- Reading input (to send to the remote server) on stdin is a crappy solution
- for library purposes. We need to invent a good way for the application to
- be able to provide the data to send.
-
-6.2 ditch telnet-specific select
-
- Make the telnet support's network select() loop go away and merge the code
- into the main transfer loop. Until this is done, the multi interface won't
- work for telnet.
-
-6.3 feature negotiation debug data
-
- Add telnet feature negotiation data to the debug callback as header data.
-
-6.4 send data in chunks
-
- Currently, telnet sends data one byte at a time. This is fine for
- interactive use, but inefficient for anything else. Data should be sent in
- larger chunks.
-
-7. SMTP
-
-7.1 Pipelining
-
- Add support for pipelining emails.
-
-7.2 Graceful base64 decoding failure
-
- Rather than shutting down the session and returning an error when the
- decoding of a base64 encoded authentication response fails, we should
- gracefully shut down the authentication process by sending a * response to
- the server as per RFC4954.
-
-7.3 Enhanced capability support
-
- Add the ability, for an application that uses libcurl, to obtain the list of
- capabilities returned from the EHLO command.
-
-8. POP3
-
-8.1 Pipelining
-
- Add support for pipelining commands.
-
-8.2 Graceful base64 decoding failure
-
- Rather than shutting down the session and returning an error when the
- decoding of a base64 encoded authentication response fails, we should
- gracefully shut down the authentication process by sending a * response to
- the server as per RFC5034.
-
-8.3 Enhanced capability support
-
- Add the ability, for an application that uses libcurl, to obtain the list of
- capabilities returned from the CAPA command.
-
-9. IMAP
-
-9.1 Graceful base64 decoding failure
-
- Rather than shutting down the session and returning an error when the
- decoding of a base64 encoded authentication response fails, we should
- gracefully shut down the authentication process by sending a * response to
- the server as per RFC3501.
-
-9.2 Enhanced capability support
-
- Add the ability, for an application that uses libcurl, to obtain the list of
- capabilities returned from the CAPABILITY command.
-
-10. LDAP
-
-10.1 SASL based authentication mechanisms
-
- Currently the LDAP module only supports ldap_simple_bind_s() in order to
- bind to an LDAP server. However, this function sends the username and
- password details using the simple authentication mechanism (as clear text).
- It should be possible to use ldap_bind_s() instead, specifying the security
- context information ourselves.
-
-11. New protocols
-
-11.1 RSYNC
-
- There's no RFC for the protocol, nor a URI/URL format. An implementation
- should most probably use an existing rsync library, such as librsync.
-
-12. SSL
-
-12.1 Disable specific versions
-
- Provide an option that allows for disabling specific SSL versions, such as
- SSLv2 http://curl.haxx.se/bug/feature.cgi?id=1767276
-
-12.2 Provide mutex locking API
-
- Provide a libcurl API for setting mutex callbacks in the underlying SSL
- library, so that the same application code can use mutex locking
- independently of whether OpenSSL or GnuTLS is being used.
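-
- For reference, this is roughly what an application has to do by hand today
- with OpenSSL; the idea is that libcurl could offer one such API no matter
- which SSL backend is used:
-
-   #include <openssl/crypto.h>
-   #include <pthread.h>
-   #include <stdlib.h>
-
-   static pthread_mutex_t *ssl_locks;
-
-   static void locking_cb(int mode, int n, const char *file, int line)
-   {
-     (void)file; (void)line;
-     if(mode & CRYPTO_LOCK)
-       pthread_mutex_lock(&ssl_locks[n]);
-     else
-       pthread_mutex_unlock(&ssl_locks[n]);
-   }
-
-   static void init_openssl_locking(void)
-   {
-     int i, n = CRYPTO_num_locks();
-     ssl_locks = malloc(n * sizeof(pthread_mutex_t));
-     for(i = 0; i < n; i++)
-       pthread_mutex_init(&ssl_locks[i], NULL);
-     CRYPTO_set_locking_callback(locking_cb);
-   }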
-
-12.3 Evaluate SSL patches
-
- Evaluate/apply Gertjan van Wingerde's SSL patches:
- http://curl.haxx.se/mail/lib-2004-03/0087.html
-
-12.4 Cache OpenSSL contexts
-
- "Look at SSL cafile - quick traces look to me like these are done on every
- request as well, when they should only be necessary once per ssl context (or
- once per handle)". The major improvement we can rather easily do is to make
- sure we don't create and kill a new SSL "context" for every request, but
- instead make one for every connection and re-use that SSL context in the same
- style connections are re-used. It will make us use slightly more memory but
- it will libcurl do less creations and deletions of SSL contexts.
-
-12.5 Export session ids
-
- Add an interface to libcurl that enables "session IDs" to get
- exported/imported. Cris Bailiff said: "OpenSSL has functions which can
- serialise the current SSL state to a buffer of your choice, and recover/reset
- the state from such a buffer at a later date - this is used by mod_ssl for
- apache to implement an SSL session ID cache".
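-
- On the OpenSSL side such an interface could build on i2d_SSL_SESSION() and
- d2i_SSL_SESSION(). A minimal export sketch (export_session() is a made-up
- name, error handling omitted):
-
-   #include <openssl/ssl.h>
-   #include <stdlib.h>
-
-   static unsigned char *export_session(SSL *ssl, int *outlen)
-   {
-     SSL_SESSION *sess = SSL_get1_session(ssl);
-     unsigned char *buf, *p;
-     *outlen = i2d_SSL_SESSION(sess, NULL);   /* first call: size only */
-     p = buf = malloc(*outlen);
-     i2d_SSL_SESSION(sess, &p);               /* second call: serialise */
-     SSL_SESSION_free(sess);
-     return buf;   /* the caller can later restore it with d2i_SSL_SESSION() */
-   }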
-
-12.6 Provide callback for cert verification
-
- OpenSSL supports a callback for customised verification of the peer
- certificate, but this doesn't seem to be exposed in the libcurl APIs. Could
- it be? There's so much that could be done if it were!
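-
- With OpenSSL the underlying hook is SSL_CTX_set_verify(); a minimal sketch
- of what an application-supplied callback looks like at that level:
-
-   #include <openssl/ssl.h>
-
-   static int verify_cb(int preverify_ok, X509_STORE_CTX *ctx)
-   {
-     (void)ctx;             /* the application could inspect the chain here */
-     return preverify_ok;   /* return 1 to accept, 0 to reject the cert */
-   }
-
-   static void install_verify_cb(SSL_CTX *sslctx)
-   {
-     SSL_CTX_set_verify(sslctx, SSL_VERIFY_PEER, verify_cb);
-   }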
-
-12.7 Support other SSL libraries
-
- Make curl's SSL layer capable of using other free SSL libraries. Such as
- MatrixSSL (http://www.matrixssl.org/).
-
-12.8 improve configure --with-ssl
-
- make the configure --with-ssl option first check for OpenSSL, then GnuTLS,
- then NSS...
-
-12.9 Support DANE
-
- DNS-Based Authentication of Named Entities (DANE) is a way to provide SSL
- keys and certs over DNS using DNSSEC as an alternative to the CA model.
- http://www.rfc-editor.org/rfc/rfc6698.txt
-
- An initial patch was posted by Suresh Krishnaswamy on March 7th 2013
- (http://curl.haxx.se/mail/lib-2013-03/0075.html) but it took too simple an
- approach. See Daniel's comments:
- http://curl.haxx.se/mail/lib-2013-03/0103.html . libunbound may be the
- correct library to base this development on.
-
-13. GnuTLS
-
-13.1 SSL engine stuff
-
- Is this even possible?
-
-13.2 check connection
-
- Add a way to check if the connection seems to be alive, to correspond to
- the SSL_peek() way we use with OpenSSL.
-
-14. SASL
-
-14.1 Other authentication mechanisms
-
- Add support for GSSAPI to SMTP, POP3 and IMAP.
-
-15. Client
-
-15.1 sync
-
- "curl --sync http://example.com/feed[1-100].rss" or
- "curl --sync http://example.net/{index,calendar,history}.html"
-
- Downloads a range or set of URLs using the remote name, but only if the
- remote file is newer than the local file. A Last-Modified HTTP date header
- should also be used to set the mod date on the downloaded file.
-
-15.2 glob posts
-
- Globbing support for -d and -F, as in 'curl -d "name=foo[0-9]" URL'.
- This is easily scripted though.
-
-15.3 prevent file overwriting
-
- Add an option that prevents cURL from overwriting existing local files. When
- used, and there already is an existing file with the target file name
- (either -O or -o), a number should be appended (and increased if already
- existing), so that index.html first becomes index.html.1, then index.html.2,
- and so on.
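-
- A trivial sketch of the name-picking part (next_free_name() is a made-up
- helper, not curl code):
-
-   #include <stdio.h>
-
-   /* if "index.html" exists, try "index.html.1", "index.html.2", ... */
-   static void next_free_name(const char *wanted, char *out, size_t outlen)
-   {
-     FILE *f;
-     int i = 0;
-     snprintf(out, outlen, "%s", wanted);
-     while((f = fopen(out, "rb")) != NULL) {
-       fclose(f);
-       snprintf(out, outlen, "%s.%d", wanted, ++i);
-     }
-   }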
-
-15.4 simultaneous parallel transfers
-
- The client could be told to use a maximum of N simultaneous parallel
- transfers and then just make sure that happens. It should of course not
- make more than one
- connection to the same remote host. This would require the client to use the
- multi interface. http://curl.haxx.se/bug/feature.cgi?id=1558595
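-
- A minimal sketch of driving several easy handles with the multi interface,
- which the client would need to adopt for this (error checking and the
- per-host connection limit left out):
-
-   #include <curl/curl.h>
-
-   static void parallel_fetch(char **urls, int nurls)
-   {
-     CURLM *multi = curl_multi_init();
-     int i, running = 0;
-
-     for(i = 0; i < nurls; i++) {
-       CURL *easy = curl_easy_init();
-       curl_easy_setopt(easy, CURLOPT_URL, urls[i]);
-       curl_multi_add_handle(multi, easy);
-     }
-
-     do {
-       int numfds;
-       curl_multi_perform(multi, &running);
-       curl_multi_wait(multi, NULL, 0, 1000, &numfds); /* wait for activity */
-     } while(running);
-
-     /* real code would cap concurrency, avoid a second connection to the
-        same host, and remove/cleanup each easy handle when it completes */
-     curl_multi_cleanup(multi);
-   }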
-
-15.5 provide formpost headers
-
- Extend the capabilities of multipart formposting. How about leaving the
- ';type=foo' syntax as it is and adding an extra tag (headers) which works
- like this: curl -F "coolfiles=@fil1.txt;headers=@fil1.hdr" where fil1.hdr
- contains extra headers like
-
- Content-Type: text/plain; charset=KOI8-R
- Content-Transfer-Encoding: base64
- X-User-Comment: Please don't use browser specific HTML code
-
- which should override the program's reasonable defaults (text/plain,
- 8bit...)
-
-15.6 url-specific options
-
- Provide a way to make options bound to a specific URL among several on the
- command line. Possibly by letting ':' separate options between URLs,
- similar to this:
-
- curl --data foo --url url.com : \
- --url url2.com : \
- --url url3.com --data foo3
-
- (More details: http://curl.haxx.se/mail/archive-2004-07/0133.html)
-
- The example would do a POST-GET-POST combination on a single command line.
-
-15.7 warning when setting an option
-
- Display a warning when libcurl returns an error when setting an option.
- This can be useful to tell when support for a particular feature hasn't been
- compiled into the library.
-
-15.8 IPv6 addresses with globbing
-
- Currently the command line client needs to get url globbing disabled (with
- -g) for it to support IPv6 numerical addresses. This is a rather silly flaw
- that should be corrected. It probably involves a smarter detection of the
- '[' and ']' letters.
-
-16. Build
-
-16.1 roffit
-
- Consider extending 'roffit' to produce decent ASCII output, and use that
- instead of (g)nroff when building src/tool_hugehelp.c
-
-17. Test suite
-
-17.1 SSL tunnel
-
- Make our own version of stunnel for simple port forwarding to enable HTTPS
- and FTP-SSL tests without the stunnel dependency, and it could allow us to
- provide test tools built with either OpenSSL or GnuTLS
-
-17.2 nicer lacking perl message
-
- If perl wasn't found by the configure script, don't attempt to run the
- tests but explain nicely why they cannot be run.
-
-17.3 more protocols supported
-
- Extend the test suite to include more protocols. The telnet tests could
- just do FTP or HTTP operations (for which we have test servers).
-
-17.4 more platforms supported
-
- Make the test suite work on more platforms, such as OpenBSD and Mac OS.
- Removing the fork()s should make it even more portable.
-
-18. Next SONAME bump
-
-18.1 http-style HEAD output for ftp
-
- #undef CURL_FTP_HTTPSTYLE_HEAD in lib/ftp.c to prevent the HTTP-style
- headers from being output in NOBODY requests over FTP.
-
-18.2 combine error codes
-
- Combine some of the error codes to remove duplicates. The original
- numbering should not be changed, and the old identifiers would be
- macroed to the new ones in a CURL_NO_OLDIES section to help with
- backward compatibility.
-
- Candidates for removal and their replacements:
-
- CURLE_FILE_COULDNT_READ_FILE => CURLE_REMOTE_FILE_NOT_FOUND
-
- CURLE_FTP_COULDNT_RETR_FILE => CURLE_REMOTE_FILE_NOT_FOUND
-
- CURLE_FTP_COULDNT_USE_REST => CURLE_RANGE_ERROR
-
- CURLE_FUNCTION_NOT_FOUND => CURLE_FAILED_INIT
-
- CURLE_LDAP_INVALID_URL => CURLE_URL_MALFORMAT
-
- CURLE_TFTP_NOSUCHUSER => CURLE_TFTP_ILLEGAL
-
- CURLE_TFTP_NOTFOUND => CURLE_REMOTE_FILE_NOT_FOUND
-
- CURLE_TFTP_PERM => CURLE_REMOTE_ACCESS_DENIED
-
-18.3 extend CURLOPT_SOCKOPTFUNCTION prototype
-
- The current prototype only provides 'purpose' that tells what the
- connection/socket is for, but no protocol or similar information. That
- makes it hard for applications to differentiate between TCP and UDP, or
- even between HTTP and FTP, and similar.
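-
- For illustration, today's callback prototype and a purely hypothetical
- extended form (the extended names below are invented here, not an actual
- proposal):
-
-   #include <curl/curl.h>
-
-   /* today's prototype (from curl/curl.h):
-      int callback(void *clientp, curl_socket_t curlfd, curlsocktype purpose);
-   */
-
-   /* hypothetical extended form, for illustration only */
-   typedef int (*curl_sockopt_ext_callback)(void *clientp,
-                                            curl_socket_t curlfd,
-                                            curlsocktype purpose,
-                                            long protocol,  /* CURLPROTO_* */
-                                            int socktype);  /* STREAM/DGRAM */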
-
-19. Next major release
-
-19.1 cleanup return codes
-
- curl_easy_cleanup() returns void, but curl_multi_cleanup() returns a
- CURLMcode. These should be changed to be the same.
-
-19.2 remove obsolete defines
-
- remove obsolete defines from curl/curl.h
-
-19.3 size_t
-
- make several functions use size_t instead of int in their APIs
-
-19.4 remove several functions
-
- remove the following functions from the public API:
-
- curl_getenv
-
- curl_mprintf (and variations)
-
- curl_strequal
-
- curl_strnequal
-
- They will instead become curlx_ alternatives, so that the curl tool can
- still use them by building them from source.
-
- These functions have no purpose anymore:
-
- curl_multi_socket
-
- curl_multi_socket_all
-
-19.5 remove CURLOPT_FAILONERROR
-
- Remove support for CURLOPT_FAILONERROR, it has gotten too kludgy and weird
- internally. Let the app judge success or not for itself.
-
-19.6 remove CURLOPT_DNS_USE_GLOBAL_CACHE
-
- Remove support for a global DNS cache. Anything global is silly, and we
- already offer the share interface for the same functionality but done
- "right".
-
-19.7 remove progress meter from libcurl
-
- The internally provided progress meter output doesn't belong in the library.
- Basically no application wants it (apart from curl) but instead applications
- can and should do their own progress meters using the progress callback.
-
- The progress callback should then be bumped as well to get proper 64bit
- variable types passed to it instead of doubles so that big files work
- correctly.
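-
- A hypothetical 64-bit prototype for illustration only; curl_off_t is the
- existing 64-bit type in curl/curl.h:
-
-   #include <curl/curl.h>
-
-   /* hypothetical replacement for curl_progress_callback, using curl_off_t
-      instead of double so sizes beyond 2^53 bytes are reported exactly */
-   typedef int (*curl_progress64_callback)(void *clientp,
-                                           curl_off_t dltotal,
-                                           curl_off_t dlnow,
-                                           curl_off_t ultotal,
-                                           curl_off_t ulnow);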
-
-19.8 remove 'curl_httppost' from public
-
- curl_formadd() was made to fill in a public struct, but the fact that the
- struct is public is never really used by applications for their own
- advantage; instead it often restricts how the form functions can or cannot
- be modified.
-
- Changing them to return a private handle will benefit the implementation and
- allow us much greater freedom while still maintaining a solid API and ABI.
-
-19.9 have form functions use CURL handle argument
-
- curl_formadd() and curl_formget() both currently lack a CURL handle
- argument, but both can use a callback that is set in the easy handle, and
- thus curl_formget() with a callback cannot function without
- curl_easy_perform() (or similar) having been called first - which is hard
- to grasp and a design mistake.
-
-19.10 Add CURLOPT_MAIL_CLIENT option
-
- Rather than use the URL to specify the mail client string to present in the
- HELO and EHLO commands, libcurl should support a new CURLOPT specifically
- for specifying this data, as the URL is non-standard and, to be honest, a
- bit of a hack ;-)
-
- Please see the following thread for more information:
- http://curl.haxx.se/mail/lib-2012-05/0178.html
-