Diffstat (limited to 'plugins/FTPFileYM/curl-7.29.0/docs/KNOWN_BUGS')
-rw-r--r--  plugins/FTPFileYM/curl-7.29.0/docs/KNOWN_BUGS  242
1 file changed, 0 insertions, 242 deletions
diff --git a/plugins/FTPFileYM/curl-7.29.0/docs/KNOWN_BUGS b/plugins/FTPFileYM/curl-7.29.0/docs/KNOWN_BUGS
deleted file mode 100644
index d36382740e..0000000000
--- a/plugins/FTPFileYM/curl-7.29.0/docs/KNOWN_BUGS
+++ /dev/null
@@ -1,242 +0,0 @@
-These are problems known to exist at the time of this release. Feel free to
-join in and help us correct one or more of these! Also be sure to check the
-changelog of the current development status, as one or more of these problems
-may have been fixed since this was written!
-
-80. Curl doesn't recognize certificates in DER format in keychain, but it
- works with PEM.
- http://curl.haxx.se/bug/view.cgi?id=3439999
-
-79. SMTP. When sending data to multiple recipients, curl will abort and return
- failure if one of the recipients indicates failure (on the "RCPT TO"
- command). Ordinary mail programs would proceed and still send to the ones
- that can receive data. This is subject to change in the future.
- http://curl.haxx.se/bug/view.cgi?id=3438362
-
-78. curl and libcurl don't always signal the client properly when "sending"
- zero-byte files - it makes, for example, the command line client not create
- any file at all, such as when using FTP.
- http://curl.haxx.se/bug/view.cgi?id=3438362
-
-77. CURLOPT_FORBID_REUSE on a handle prevents NTLM from working, since NTLM
- "abuses" the underlying connection re-use system and forcing connections to
- close breaks the NTLM support.
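-
- A rough sketch of the interaction (a fragment inside a function with
- <curl/curl.h> included; the URL and credentials are made up). NTLM
- authenticates the connection rather than each request, so forcing
- connections closed throws the authentication state away:
-
-     CURL *h = curl_easy_init();
-     if(h) {
-       curl_easy_setopt(h, CURLOPT_URL, "http://example.com/protected");
-       curl_easy_setopt(h, CURLOPT_HTTPAUTH, CURLAUTH_NTLM);
-       curl_easy_setopt(h, CURLOPT_USERPWD, "user:secret");
-       /* avoid this while using NTLM - closing the connection after each
-          request discards the connection-level NTLM state: */
-       /* curl_easy_setopt(h, CURLOPT_FORBID_REUSE, 1L); */
-       curl_easy_perform(h);
-       curl_easy_cleanup(h);
-     }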
-
-76. The SOCKET type in Win64 is 64 bits wide (and thus so is curl_socket_t on
- that platform), while long is only 32 bits. This makes it impossible for
- curl_easy_getinfo() to return a socket properly with the CURLINFO_LASTSOCKET
- option, as it does on all other operating systems.
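-
- A sketch of what the truncation looks like ('h' is assumed to be an easy
- handle that has completed a transfer):
-
-     long sockfd;
-     /* CURLINFO_LASTSOCKET hands back the socket in a 32-bit long, but on
-        Win64 SOCKET/curl_socket_t is 64 bits wide, so the value may be
-        truncated there. */
-     if(curl_easy_getinfo(h, CURLINFO_LASTSOCKET, &sockfd) == CURLE_OK &&
-        sockfd != -1) {
-       /* use the socket - not reliable on Win64 */
-     }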
-
-75. NTLM authentication involving a Unicode user name or password only works
- properly if libcurl is built with UNICODE defined together with the
- schannel/winssl backend. The original problem was mentioned in:
- http://curl.haxx.se/mail/lib-2009-10/0024.html
- http://curl.haxx.se/bug/view.cgi?id=2944325
-
- The schannel version was verified to work as mentioned in
- http://curl.haxx.se/mail/lib-2012-07/0073.html
-
-73. If a connection is made to an FTP server but the server then just never
- sends the 220 response or otherwise is dead slow, libcurl will not
- acknowledge the connection timeout during that phase but only the "real"
- timeout - which may surprise users, as most people probably consider that
- to be part of the connect phase. Brought up (and being misunderstood) in:
- http://curl.haxx.se/bug/view.cgi?id=2844077
-
-72. "Pausing pipeline problems."
- http://curl.haxx.se/mail/lib-2009-07/0214.html
-
-70. Problem re-using easy handle after call to curl_multi_remove_handle
- http://curl.haxx.se/mail/lib-2009-07/0249.html
-
-68. "More questions about ares behavior".
- http://curl.haxx.se/mail/lib-2009-08/0012.html
-
-67. When creating multipart formposts, the file name part can be encoded with
- something beyond ASCII, but currently libcurl will only pass on the verbatim
- string the app provides. Several browsers already do this encoding. The key
- seems to be the updated draft to RFC2231:
- http://tools.ietf.org/html/draft-reschke-rfc2231-in-http-02
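-
- For illustration, a formpost whose file name holds non-ASCII characters
- ('h' is an assumed easy handle, the names are made up); the
- CURLFORM_FILENAME string is currently passed on verbatim rather than being
- encoded:
-
-     struct curl_httppost *post = NULL, *last = NULL;
-     curl_formadd(&post, &last,
-                  CURLFORM_COPYNAME, "upload",
-                  CURLFORM_FILE, "rapport-été.txt",     /* local file */
-                  CURLFORM_FILENAME, "rapport-été.txt", /* sent verbatim */
-                  CURLFORM_END);
-     curl_easy_setopt(h, CURLOPT_HTTPPOST, post);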
-
-66. When using telnet, the time limitation options don't work.
- http://curl.haxx.se/bug/view.cgi?id=2818950
-
-65. When doing FTP over a SOCKS proxy or CONNECT through an HTTP proxy and the
- multi interface is used, libcurl will fail if the (passive) TCP connection
- for the data transfer isn't more or less instant, as the code does not
- properly wait for the connect to be confirmed. See test case 564 for a first
- shot at a test case.
-
-63. When CURLOPT_CONNECT_ONLY is used, the handle cannot reliably be re-used
- for any further requests or transfers. The work-around is then to close that
- handle with curl_easy_cleanup() and create a new one. Some more details:
- http://curl.haxx.se/mail/lib-2009-04/0300.html
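-
- A sketch of that work-around ('h' is the easy handle that was used with
- CURLOPT_CONNECT_ONLY; the URL is made up):
-
-     curl_easy_cleanup(h);        /* throw the old handle away */
-     h = curl_easy_init();        /* start over with a fresh one */
-     if(h) {
-       curl_easy_setopt(h, CURLOPT_URL, "http://example.com/next");
-       curl_easy_perform(h);
-       curl_easy_cleanup(h);
-     }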
-
-61. If an upload using Expect: 100-continue receives an HTTP 417 response,
- it ought to be automatically resent without the Expect:. A workaround is
- for the client application to redo the transfer after disabling Expect:.
- http://curl.haxx.se/mail/archive-2008-02/0043.html
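-
- A sketch of the client-side work-around - pass an "Expect:" header with no
- contents so libcurl never sends Expect: 100-continue ('h' is an assumed
- easy handle):
-
-     struct curl_slist *hdrs = NULL;
-     hdrs = curl_slist_append(hdrs, "Expect:"); /* blank value disables it */
-     curl_easy_setopt(h, CURLOPT_HTTPHEADER, hdrs);
-     /* ... redo the upload ... */
-     curl_slist_free_all(hdrs);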
-
-60. libcurl closes the connection if an HTTP 401 reply is received while it
- is waiting for the 100-continue response.
- http://curl.haxx.se/mail/lib-2008-08/0462.html
-
-58. It seems sensible to be able to use CURLOPT_NOBODY and
- CURLOPT_FAILONERROR with FTP to detect if a file exists or not, but it is
- not working: http://curl.haxx.se/mail/lib-2008-07/0295.html
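-
- The intended usage would look roughly like this ('h' is an assumed easy
- handle and the URL is made up) - per the above it does not currently work:
-
-     CURLcode rc;
-     curl_easy_setopt(h, CURLOPT_URL, "ftp://example.com/dir/file.txt");
-     curl_easy_setopt(h, CURLOPT_NOBODY, 1L);      /* no download, just check */
-     curl_easy_setopt(h, CURLOPT_FAILONERROR, 1L); /* fail if it is missing */
-     rc = curl_easy_perform(h);
-     /* rc == CURLE_OK would ideally mean the file exists */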
-
-57. On VMS-Alpha: when using an HTTP file upload, the file is not sent to the
- server with the correct content-length. Sending a file with 511 or fewer
- bytes, a content-length of 512 is used. Sending a file with 513 - 1023 bytes,
- a content-length of 1024 is used. Files with a length that is a multiple of
- 512 bytes show the correct content-length. Only these files work for upload.
- http://curl.haxx.se/bug/view.cgi?id=2057858
-
-56. When libcurl sends CURLOPT_POSTQUOTE commands when connected to a SFTP
- server using the multi interface, the commands are not being sent correctly
- and instead the connection is "cancelled" (the operation is considered done)
- prematurely. There is a half-baked (busy-looping) patch provided in the bug
- report but it cannot be accepted as-is. See
- http://curl.haxx.se/bug/view.cgi?id=2006544
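-
- A sketch of the kind of setup that triggers it ('h' and 'multi' are assumed
- to be already-created easy and multi handles; the URL and command are made
- up):
-
-     struct curl_slist *cmds = NULL;
-     cmds = curl_slist_append(cmds, "rm /tmp/leftover.tmp");
-     curl_easy_setopt(h, CURLOPT_URL, "sftp://example.com/file.txt");
-     curl_easy_setopt(h, CURLOPT_POSTQUOTE, cmds); /* run after the transfer */
-     curl_multi_add_handle(multi, h); /* driven with curl_multi_perform() */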
-
-55. libcurl fails to build with MIT Kerberos for Windows (KfW) due to KfW's
- library header files exporting symbols/macros that should be kept private
- to the KfW library. See ticket #5601 at http://krbdev.mit.edu/rt/
-
-52. Gautam Kachroo's issue that identifies a problem with the multi interface
- where a connection can be re-used without actually being properly
- SSL-negotiated:
- http://curl.haxx.se/mail/lib-2008-01/0277.html
-
-49. If using --retry and the transfer times out (possibly due to using -m or
- -y/-Y) the next attempt doesn't resume the transfer properly from what was
- downloaded in the previous attempt but will truncate and restart at the
- original position it was at before the previous failed attempt. See
- http://curl.haxx.se/mail/lib-2008-01/0080.html and Mandriva bug report
- https://qa.mandriva.com/show_bug.cgi?id=22565
-
-48. If the CONNECT response headers are larger than BUFSIZE (16KB) when the
- connection is meant to be kept alive (like for NTLM proxy auth), the
- function will return prematurely and will confuse the rest of the HTTP
- protocol code. This should be very rare.
-
-43. There seems to be a problem when connecting to the Microsoft telnet server.
- http://curl.haxx.se/bug/view.cgi?id=1720605
-
-41. When doing an operation over FTP that requires the ACCT command (but not
- when logging in), the operation will fail since libcurl doesn't detect this
- and thus fails to issue the correct command:
- http://curl.haxx.se/bug/view.cgi?id=1693337
-
-39. Steffen Rumler's Race Condition in Curl_proxyCONNECT:
- http://curl.haxx.se/mail/lib-2007-01/0045.html
-
-38. Kumar Swamy Bhatt's problem in ftp/ssl "LIST" operation:
- http://curl.haxx.se/mail/lib-2007-01/0103.html
-
-35. Both SOCKS5 and SOCKS4 proxy connections are done blocking, which is very
- bad when used with the multi interface.
-
-34. The SOCKS4 connection codes don't properly acknowledge (connect) timeouts.
- Also see #12. According to bug #1556528, even the SOCKS5 connect code does
- not do it right: http://curl.haxx.se/bug/view.cgi?id=1556528
-
-31. "curl-config --libs" will include details set in LDFLAGS when configure is
- run that might be needed only for building libcurl. Further, curl-config
- --cflags suffers from the same effects with CFLAGS/CPPFLAGS.
-
-30. You need to pass -g to the command line tool in order to use RFC2732-style
- IPv6 numerical addresses in URLs.
-
-29. IPv6 URLs with a zone ID are not nicely supported.
- http://www.ietf.org/internet-drafts/draft-fenner-literal-zone-02.txt (expired)
- specifies the use of a plus sign instead of a percent when specifying zone
- IDs in URLs to get around the problem of percent signs being
- special. According to the reporter, Firefox deals with the URL _with_ a
- percent sign (which seems like a blatant URL spec violation).
- libcurl supports zone IDs where the percent sign is URL-escaped (i.e. %25).
-
- See http://curl.haxx.se/bug/view.cgi?id=1371118
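-
- For illustration, the form libcurl does accept ('h' is an assumed easy
- handle; the address and interface name are made up):
-
-     /* fe80::1 on interface eth0, with the percent sign URL-escaped */
-     curl_easy_setopt(h, CURLOPT_URL, "http://[fe80::1%25eth0]/");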
-
-26. NTLM authentication using SSPI (on Windows) when (lib)curl is running in
- "system context" will make it use wrong(?) user name - at least when compared
- to what winhttp does. See http://curl.haxx.se/bug/view.cgi?id=1281867
-
-23. SOCKS-related problems:
- B) libcurl doesn't support FTPS over a SOCKS proxy.
- E) libcurl doesn't support active FTP over a SOCKS proxy
-
- We probably have even more bugs and lack of features when a SOCKS proxy is
- used.
-
-22. Sending files to an FTP server using curl on VMS might lead to curl
- complaining about "unaligned file size" on completion. The problem is related
- to VMS file structures and the perceived file sizes stat() returns. A
- possible fix would involve sending a "STRU VMS" command.
- http://curl.haxx.se/bug/view.cgi?id=1156287
-
-21. FTP ASCII transfers do not follow RFC959. They don't convert the data
- accordingly (neither for sending nor for receiving). RFC 959 section 3.1.1.1
- clearly describes how this should be done:
-
- The sender converts the data from an internal character representation to
- the standard 8-bit NVT-ASCII representation (see the Telnet
- specification). The receiver will convert the data from the standard
- form to his own internal form.
-
- Since 7.15.4, at least line endings are converted.
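-
- For reference, an ASCII (type A) transfer is requested like this ('h' is an
- assumed easy handle, the URL is made up); per the above, only the line
- endings are actually converted:
-
-     curl_easy_setopt(h, CURLOPT_URL, "ftp://example.com/readme.txt");
-     curl_easy_setopt(h, CURLOPT_TRANSFERTEXT, 1L); /* ask for ASCII mode */
-     curl_easy_perform(h);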
-
-16. FTP URLs passed to curl may contain NUL (0x00) in the RFC 1738 <user>,
- <password>, and <fpath> components, encoded as "%00". The problem is that
- curl_unescape does not detect this, but instead returns a shortened C
- string. From a strict FTP protocol standpoint, NUL is a valid character
- within RFC 959 <string>, so the way to handle this correctly in curl would
- be to use a data structure other than a plain C string, one that can handle
- embedded NUL characters. From a practical standpoint, most FTP servers
- would not meaningfully support NUL characters within RFC 959 <string>,
- anyway (e.g., UNIX pathnames may not contain NUL).
-
-14. Test case 165 might fail on a system which has libidn present, but with an
- old iconv version (2.1.3 is a known bad version), since it doesn't recognize
- the charset when named ISO8859-1. Changing the name to ISO-8859-1 makes the
- test pass, but instead makes it fail on Solaris hosts that use its native
- iconv.
-
-13. curl version 7.12.2 fails on AIX if compiled with --enable-ares.
- The workaround is to combine --enable-ares with --disable-shared
-
-12. When connecting to a SOCKS proxy, the (connect) timeout is not properly
- acknowledged after the actual TCP connect (during the SOCKS "negotiate"
- phase).
-
-10. To get HTTP Negotiate authentication to work fine, you need to provide a
- (fake) user name (this concerns both curl and the lib) because the code
- wrongly only considers authentication if there's a user name provided.
- http://curl.haxx.se/bug/view.cgi?id=1004841. How?
- http://curl.haxx.se/mail/lib-2004-08/0182.html
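-
- At the libcurl level the usual work-around is to hand in blank credentials
- so that the authentication code kicks in ('h' is an assumed easy handle and
- the URL is made up):
-
-     curl_easy_setopt(h, CURLOPT_URL, "http://example.com/secure");
-     curl_easy_setopt(h, CURLOPT_HTTPAUTH, CURLAUTH_GSSNEGOTIATE);
-     curl_easy_setopt(h, CURLOPT_USERPWD, ":"); /* fake/empty user name */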
-
-8. Doing a resumed upload over HTTP does not work with '-C -', because curl
- doesn't do a HEAD first to get the initial size. This needs to be done
- manually for HTTP PUT resume to work, and then '-C [index]' can be used.
-
-6. libcurl ignores empty path parts in FTP URLs, whereas RFC1738 states that
- such parts should be sent to the server as 'CWD ' (without an argument).
- The only exception to this rule is that we knowingly break this if the
- empty part is first in the path, as then we use the double slashes to
- indicate that the user wants to reach the root dir (this exception SHALL
- remain even when this bug is fixed).
-
-5. libcurl doesn't treat the content-length of compressed data properly, as
- it seems HTTP servers send the *uncompressed* length in that header and
- libcurl thinks of it as the *compressed* length. Some explanations are here:
- http://curl.haxx.se/mail/lib-2003-06/0146.html
-
-2. If an HTTP server responds to a HEAD request and includes a body (thus
- violating RFC2616), curl won't wait to read the response but just stops
- reading and returns. If a second request (let's assume a GET) is then
- immediately made to the same server again, the connection will of course be
- re-used fine, and the second request will be sent off, but when the
- response is to be read, the previous response body is what curl will read,
- and havoc is what happens.
- More details on this are found in this libcurl mailing list thread:
- http://curl.haxx.se/mail/lib-2002-08/0000.html