71 Commits
v1.83 ... v1.84

Author SHA1 Message Date
06032aa661 Merge pull request #795 from ggtakec/master
Updated ChangeLog and configure.ac for release 1.84
2018-07-08 18:22:54 +09:00
e8fb2aefb3 Updated ChangeLog and configure.ac for release 1.84 2018-07-08 09:06:52 +00:00
3cb6c5e161 Merge pull request #793 from ggtakec/master
Added list_object_max_keys option based on #783 PR
2018-07-08 13:08:47 +09:00
7e0c53dfe9 Added list_object_max_keys option based on #783 PR 2018-07-08 03:49:10 +00:00
c2ca7e43b6 Merge pull request #789 from juliogonzalez/doc-opensuse-suse
Instructions for SUSE and openSUSE prebuilt packages
2018-07-08 11:42:05 +09:00
ae47d5d349 Merge pull request #786 from ambiknai/log_enhancements
Log messages for 5xx and 4xx HTTP response code
2018-07-08 11:28:54 +09:00
35d3fce7a0 Review comment: Include the error code being returned 2018-07-06 05:14:32 -04:00
4177d8bd3b Review comment: Include the error code being returned 2018-07-06 03:03:57 -04:00
ad5349a488 Changes as per review comments 2018-07-05 05:02:04 -04:00
6b57a8c1fc Instructions for SUSE and openSUSE prebuilt packages 2018-07-05 10:23:26 +02:00
92a4034c5e Log messages for 5xx and 4xx HTTP response code 2018-07-04 03:50:45 -04:00
3e4002df0d Merge pull request #780 from wjt/initialize-libgcry
gnutls_auth: initialize libgcrypt
2018-06-24 12:48:08 +09:00
1b9ec7f4fc Merge pull request #774 from nkkashyap/master
Option for IAM authentication endpoint
2018-06-24 12:36:23 +09:00
4a7c4a9e9d Merge pull request #781 from ggtakec/master
Fixed an error by cppcheck on OSX
2018-06-24 12:22:35 +09:00
0d3fb0658a Fixed an error by cppcheck on OSX 2018-06-24 02:38:59 +00:00
73cf2ba95d gnutls_auth: initialize libgcrypt
Without this change, the following warning appears in the syslog/journal
during startup:

  Libgcrypt warning: missing initialization - please fix the application

From the [documentation][0]:

> The function `gcry_check_version` initializes some subsystems used by
> Libgcrypt and must be invoked before any other function in the
> library.

Fixes #524, which says:

> gnutls is initialized by gnutls_global_init() function and
> gcry_check_version() function for initializing libgcry is called from
> this gnutls_global_init().

I checked the gnutls source and it hasn't contained a call to
gcry_check_version() since the libgcrypt backend was removed in 2011
(commit 8116cdc8f131edd586dad3128ae35dd744cfc32f). In any case, the
gcry_check_version() documentation continues:

> It is important that these initialization steps are not done by a
> library but by the actual application.

so it would be incorrect for a library used by s3fs to initialize
libgcrypt.

[0]: https://www.gnupg.org/documentation/manuals/gcrypt/Initializing-the-library.html
2018-06-21 20:55:00 +01:00
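The fix itself reduces to a single guarded call during SSL setup, as the src/gnutls_auth.cpp hunk later in this page shows. A minimal sketch of the pattern, assuming a gnutls build configured without the nettle backend:

```cpp
// Minimal sketch: when gnutls uses the libgcrypt backend (USE_GNUTLS_NETTLE
// not defined), the application itself must initialize libgcrypt.
#include <gnutls/gnutls.h>
#ifndef USE_GNUTLS_NETTLE
#include <gcrypt.h>
#endif

bool s3fs_init_global_ssl(void)
{
    if(GNUTLS_E_SUCCESS != gnutls_global_init()){
        return false;
    }
#ifndef USE_GNUTLS_NETTLE
    // gcry_check_version() initializes libgcrypt's subsystems; passing NULL
    // skips the minimum-version check. It returns NULL on failure.
    if(NULL == gcry_check_version(NULL)){
        return false;
    }
#endif // USE_GNUTLS_NETTLE
    return true;
}
```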
5a481e6a01 Option for IBM IAM auth endpoint added return 2018-06-04 16:44:14 +05:30
d8e12839af Option for IBM IAM auth endpoint 2018-05-31 16:02:48 +05:30
3bf05dabea Merge pull request #769 from orozery/revert_to_async_read
Revert "enable FUSE read_sync by default"
2018-05-28 20:23:54 +09:00
d4e86a17d1 Revert "enable FUSE read_sync by default"
This reverts commit 86b0921ac4.

Conflicts:
	src/s3fs.cpp
2018-05-28 13:49:54 +03:00
6555e7ebb0 Merge pull request #768 from ggtakec/master
Fixed memory leak
2018-05-27 20:10:16 +09:00
ae9d8eb734 Fixed memory leak 2018-05-27 10:48:03 +00:00
e49d594db4 Merge pull request #766 from gaul/s3fs-python
Remove s3fs-python
2018-05-27 16:43:27 +09:00
66bb0898db Merge pull request #765 from gaul/debian
Add Debian installation instructions
2018-05-27 16:35:51 +09:00
b323312312 Remove s3fs-python
This no longer exists.
2018-05-23 16:06:41 -07:00
58e52bad4f Add Debian installation instructions 2018-05-23 16:03:02 -07:00
57b2a60172 Merge pull request #764 from orozery/remove_false_multihead_warnings
Remove false multihead warnings
2018-05-23 22:38:35 +09:00
212bbbbdf0 Merge pull request #763 from orozery/cleanup_share_after_handles
cleanup curl handles before curl share
2018-05-23 22:30:36 +09:00
a0e62b5588 Merge pull request #762 from gaul/s3proxy-1.6.0
Upgrade to S3Proxy 1.6.0
2018-05-23 22:23:33 +09:00
e9831dd772 Merge pull request #761 from gaul/ubuntu-16.04
Simplify installation for Ubuntu 16.04
2018-05-23 22:15:01 +09:00
da95afba8a Merge pull request #756 from orozery/optimize_defaults
Optimize defaults
2018-05-23 22:05:00 +09:00
0bd875eb9e remove false readdir_multi_head warnings 2018-05-22 17:10:50 +03:00
af63a42773 cleanup curl handles before curl share 2018-05-21 13:20:09 +03:00
ad9a374229 Simplify installation for Ubuntu 16.04
Also reorganize installation vs. compilation.
2018-05-16 17:40:13 -07:00
1b86e4d414 Upgrade to S3Proxy 1.6.0
Release notes:

https://github.com/gaul/s3proxy/releases/tag/s3proxy-1.6.0
https://github.com/gaul/s3proxy/releases/tag/s3proxy-1.5.5
https://github.com/gaul/s3proxy/releases/tag/s3proxy-1.5.4
2018-05-16 16:38:17 -07:00
86b0921ac4 enable FUSE read_sync by default 2018-05-06 16:10:36 +03:00
dbe98dcbd2 Merge pull request #755 from ggtakec/master
Added reset curl handle when returning to handle pool
2018-05-06 21:35:39 +09:00
4a72b60707 increase default stat cache size from 1000 to 100000 2018-05-06 15:31:07 +03:00
7a4696fc17 recommend openssl over gnutls for performance 2018-05-06 15:29:42 +03:00
e3de6ea458 Added reset curl handle when returning to handle pool 2018-05-06 12:11:53 +00:00
1db4739ed8 Merge pull request #754 from nkkashyap/master
Validate the URL format for http/https
2018-05-06 21:02:33 +09:00
25375a6b48 Validate the URL fixed inefficient usage of find 2018-05-04 11:24:32 +05:30
ca87df7d44 Validate the URL format for http/https 2018-05-03 22:08:28 +05:30
d052dc0b9d Merge pull request #753 from cfz/master
fix xpath selector in bucket listing
2018-05-02 12:04:12 +09:00
3f542e9cf5 Merge pull request #745 from orozery/handle_mkdir_exists
don't fail mkdir when directory exists
2018-05-02 11:37:18 +09:00
04493de767 fix xpath selector in bucket listing
the original implementation in get_base_exp() depends on the order of the XML returned from the server.
in particular, when listing a directory with subdirectories, the XML response contains more than two <Prefix> nodes (some of them inside <CommonPrefixes> nodes).
the source code arbitrarily selects the first one in the document (nodes->nodeTab[0]->xmlChildrenNode).
some S3-compatible services return the list-bucket result in a different order, leading s3fs to wrong behavior
2018-04-23 15:11:29 +08:00
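To see the failure mode in isolation, here is a standalone libxml2 sketch (namespace handling omitted for brevity; the actual fix in get_base_exp(), shown in the src/s3fs.cpp hunks later in this page, also registers the s3: namespace and anchors the expression at /s3:ListBucketResult):

```cpp
#include <libxml/parser.h>
#include <libxml/xpath.h>
#include <cstdio>
#include <cstring>

int main()
{
    // <CommonPrefixes> happens to come first in this response, as some
    // S3-compatible servers emit it.
    const char* xml =
        "<ListBucketResult>"
        "<CommonPrefixes><Prefix>dir/sub/</Prefix></CommonPrefixes>"
        "<Prefix>dir/</Prefix>"
        "</ListBucketResult>";
    xmlDocPtr doc = xmlReadMemory(xml, (int)strlen(xml), "", NULL, 0);
    xmlXPathContextPtr ctx = xmlXPathNewContext(doc);

    // Old selector: "//Prefix" -> nodeTab[0] is the node under
    // <CommonPrefixes> ("dir/sub/"), since it comes first in document
    // order. Anchoring at the root selects the intended top-level node.
    xmlXPathObjectPtr xp =
        xmlXPathEvalExpression((const xmlChar*)"/ListBucketResult/Prefix", ctx);
    if(xp && !xmlXPathNodeSetIsEmpty(xp->nodesetval)){
        xmlChar* val = xmlNodeListGetString(
            doc, xp->nodesetval->nodeTab[0]->xmlChildrenNode, 1);
        printf("prefix: %s\n", (char*)val);  // prints "dir/"
        xmlFree(val);
    }
    xmlXPathFreeObject(xp);
    xmlXPathFreeContext(ctx);
    xmlFreeDoc(doc);
    return 0;
}
```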
4fdab46617 don't fail mkdir when directory exists 2018-04-08 11:13:47 +03:00
1a23b880d5 Merge pull request #739 from orozery/cleanup_failing_curl_handles
cleanup curl handle state on retries
2018-04-01 22:45:04 +09:00
b3c376afbe Merge pull request #733 from phxvyper/enhance/dupe-bucket-error
More useful error message for dupe entries in passwd file
2018-04-01 22:11:00 +09:00
adcf5754ae cleanup failing curl handles on retries 2018-03-29 13:56:08 +03:00
0863672e27 add a more helpful error message for when there are multiple entries for the same bucket in the passwd file 2018-03-13 14:37:34 -07:00
0f503ced25 Merge pull request #729 from dmgk/master
FreeBSD build fixes
2018-03-04 16:36:31 +09:00
987a166bf4 Merge pull request #726 from orozery/instance_name_logging
add an instance_name option for logging
2018-03-04 15:41:12 +09:00
57b6f0eeaf Merge pull request #724 from orozery/dont_fail_multirequest
don't fail multirequest on single thread error
2018-03-04 15:35:29 +09:00
f71a28f9b9 Merge pull request #714 from orozery/reduce_lock_contention
reduce lock contention on file open
2018-03-04 13:36:08 +09:00
45c7ea9194 Merge pull request #710 from orozery/disk_space_reservation
add disk space reservation
2018-03-04 13:27:25 +09:00
c9f4312588 FreeBSD build fixes 2018-03-02 15:58:52 -05:00
8b657eee41 add disk space reservation 2018-02-28 19:20:23 +02:00
b9c9de7f97 Merge pull request #712 from chrilith/master
Added Cygwin build options
2018-02-28 23:07:54 +09:00
e559f05326 Merge pull request #704 from vadimeremeev/patch-1
Update README.md with details about .passwd-s3fs
2018-02-28 22:22:01 +09:00
824124fedc Merge pull request #727 from ggtakec/master
Fixed Travis CI error about cppcheck - #713
2018-02-28 22:04:04 +09:00
be9d407fa0 Fixed cppcheck error on osx 2018-02-28 12:29:58 +00:00
c494e54320 Fixed cppcheck error on osx 2018-02-28 12:06:06 +00:00
b52b6f3fc5 add an instance_name option for logging 2018-02-28 09:51:35 +02:00
82c9733101 don't fail multirequest on single thread error 2018-02-26 12:06:08 +02:00
a45ff6cdaa Fixed cppcheck error and clean ^M code 2018-02-25 13:08:41 +00:00
960d45c853 Fixed cppcheck error on osx 2018-02-25 08:51:19 +00:00
246b767b64 Remove space in front of ~/.passwd-s3fs 2018-02-05 16:49:02 +07:00
0edf056e95 reduce lock contention on file open 2018-02-04 17:13:58 +02:00
88819af2d8 Added Cygwin build options 2018-02-02 15:58:10 +01:00
b048c981ad Update README.md with details about .passwd-s3fs 2017-12-22 16:20:02 +07:00
19 changed files with 633 additions and 386 deletions

.gitattributes (new file)

@ -0,0 +1 @@
* text eol=lf


@ -1,6 +1,37 @@
ChangeLog for S3FS
------------------
Version 1.84 -- Jul 8, 2018
#704 - Update README.md with details about .passwd-s3fs
#710 - add disk space reservation
#712 - Added Cygwin build options
#714 - reduce lock contention on file open
#724 - don't fail multirequest on single thread error
#726 - add an instance_name option for logging
#727 - Fixed Travis CI error about cppcheck - #713
#729 - FreeBSD build fixes
#733 - More useful error message for dupe entries in passwd file
#739 - cleanup curl handle state on retries
#745 - don't fail mkdir when directory exists
#753 - fix xpath selector in bucket listing
#754 - Validate the URL format for http/https
#755 - Added reset curl handle when returning to handle pool
#756 - Optimize defaults
#761 - Simplify installation for Ubuntu 16.04
#762 - Upgrade to S3Proxy 1.6.0
#763 - cleanup curl handles before curl share
#764 - Remove false multihead warnings
#765 - Add Debian installation instructions
#766 - Remove s3fs-python
#768 - Fixed memory leak
#769 - Revert "enable FUSE read_sync by default"
#774 - Option for IAM authentication endpoint
#780 - gnutls_auth: initialize libgcrypt
#781 - Fixed an error by cppcheck on OSX
#786 - Log messages for 5xx and 4xx HTTP response code
#789 - Instructions for SUSE and openSUSE prebuilt packages
#793 - Added list_object_max_keys option based on #783 PR
Version 1.83 -- Dec 17, 2017
#606 - Add Homebrew instructions
#608 - Fix chown_nocopy losing existing uid/gid if unspecified


@ -32,11 +32,12 @@ cppcheck:
cppcheck --quiet --error-exitcode=1 \
--inline-suppr \
--std=c++03 \
-D HAVE_ATTR_XATTR_H \
-D HAVE_SYS_EXTATTR_H \
-D HAVE_MALLOC_TRIM \
-U CURLE_PEER_FAILED_VERIFICATION \
-U P_tmpdir \
-U ENOATTR \
--enable=all \
--enable=warning,style,information,missingInclude \
--suppress=missingIncludeSystem \
--suppress=unusedFunction \
--suppress=variableScope \
src/ test/


@ -22,12 +22,36 @@ Features
Installation
------------
Some systems provide pre-built packages:
* On Debian 9 and Ubuntu 16.04 or newer:
```
sudo apt-get install s3fs
```
* On SUSE 12 or newer and openSUSE 42.1 or newer:
```
sudo zypper in s3fs
```
* On Mac OS X, install via [Homebrew](http://brew.sh/):
```ShellSession
$ brew cask install osxfuse
$ brew install s3fs
```
Compilation
-----------
* On Linux, ensure you have all the dependencies:
On Ubuntu 14.04:
```
sudo apt-get install automake autotools-dev fuse g++ git libcurl4-gnutls-dev libfuse-dev libssl-dev libxml2-dev make pkg-config
sudo apt-get install automake autotools-dev fuse g++ git libcurl4-openssl-dev libfuse-dev libssl-dev libxml2-dev make pkg-config
```
On CentOS 7:
@ -47,34 +71,32 @@ make
sudo make install
```
* On Mac OS X, install via [Homebrew](http://brew.sh/):
```ShellSession
$ brew cask install osxfuse
$ brew install s3fs
```
Examples
--------
Enter your S3 identity and credential in a file `/path/to/passwd` and set
The s3fs password file can be created in one of two default locations:
* a .passwd-s3fs file in the user's home directory (i.e. ~/.passwd-s3fs)
* the system-wide /etc/passwd-s3fs file
Enter your S3 identity and credential in a file `~/.passwd-s3fs` and set
owner-only permissions:
```
echo MYIDENTITY:MYCREDENTIAL > /path/to/passwd
chmod 600 /path/to/passwd
echo MYIDENTITY:MYCREDENTIAL > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
```
Run s3fs with an existing bucket `mybucket` and directory `/path/to/mountpoint`:
```
s3fs mybucket /path/to/mountpoint -o passwd_file=/path/to/passwd
s3fs mybucket /path/to/mountpoint -o passwd_file=~/.passwd-s3fs
```
If you encounter any errors, enable debug output:
```
s3fs mybucket /path/to/mountpoint -o passwd_file=/path/to/passwd -o dbglevel=info -f -o curldbg
s3fs mybucket /path/to/mountpoint -o passwd_file=~/.passwd-s3fs -o dbglevel=info -f -o curldbg
```
You can also mount on boot by entering the following line to `/etc/fstab`:
@ -92,7 +114,7 @@ mybucket /path/to/mountpoint fuse.s3fs _netdev,allow_other 0 0
If you use s3fs with a non-Amazon S3 implementation, specify the URL and path-style requests:
```
s3fs mybucket /path/to/mountpoint -o passwd_file=/path/to/passwd -o url=http://url.to.s3/ -o use_path_request_style
s3fs mybucket /path/to/mountpoint -o passwd_file=~/.passwd-s3fs -o url=http://url.to.s3/ -o use_path_request_style
```
or (fstab)
@ -133,8 +155,7 @@ References
* [goofys](https://github.com/kahing/goofys) - similar to s3fs but has better performance and less POSIX compatibility
* [s3backer](https://github.com/archiecobbs/s3backer) - mount an S3 bucket as a single file
* [s3fs-python](https://fedorahosted.org/s3fs/) - an older and less complete implementation written in Python
* [S3Proxy](https://github.com/andrewgaul/s3proxy) - combine with s3fs to mount EMC Atmos, Microsoft Azure, and OpenStack Swift buckets
* [S3Proxy](https://github.com/gaul/s3proxy) - combine with s3fs to mount EMC Atmos, Microsoft Azure, and OpenStack Swift buckets
* [s3ql](https://bitbucket.org/nikratio/s3ql/) - similar to s3fs but uses its own object format
* [YAS3FS](https://github.com/danilop/yas3fs) - similar to s3fs but uses SNS to allow multiple clients to mount a bucket


@ -20,7 +20,7 @@
dnl Process this file with autoconf to produce a configure script.
AC_PREREQ(2.59)
AC_INIT(s3fs, 1.83)
AC_INIT(s3fs, 1.84)
AC_CONFIG_HEADER([config.h])
AC_CANONICAL_SYSTEM
@ -39,6 +39,11 @@ dnl ----------------------------------------------
dnl For OSX
dnl ----------------------------------------------
case "$target" in
*-cygwin* )
# Do something specific for windows using winfsp
CXXFLAGS="$CXXFLAGS -D_GNU_SOURCE=1"
min_fuse_version=2.8
;;
*-darwin* )
# Do something specific for mac
min_fuse_version=2.7.3


@ -62,7 +62,7 @@ the default canned acl to apply to all written s3 objects, e.g., "private", "pub
empty string means do not send header.
see http://aws.amazon.com/documentation/s3/ for the full list of canned acls.
.TP
\fB\-o\fR retries (default="2")
\fB\-o\fR retries (default="5")
number of times to retry a failed S3 transaction.
.TP
\fB\-o\fR use_cache (default="" which means disabled)
@ -143,7 +143,10 @@ time to wait for connection before giving up.
\fB\-o\fR readwrite_timeout (default="60" seconds)
time to wait between read/write activity before giving up.
.TP
\fB\-o\fR max_stat_cache_size (default="1000" entries (about 4MB))
\fB\-o\fR list_object_max_keys (default="1000")
specify the maximum number of keys returned by the S3 list objects API. The default is 1000. You can set this value to 1000 or more.
.TP
\fB\-o\fR max_stat_cache_size (default="100,000" entries (about 40MB))
maximum number of entries in the stat cache
.TP
\fB\-o\fR stat_cache_expire (default is no expire)
@ -183,7 +186,7 @@ number of one part size in multipart uploading request.
The default size is 10MB(10485760byte), minimum value is 5MB(5242880byte).
Specify number of MB and over 5(MB).
.TP
\fB\-o\fR ensure_diskfree(default the same as multipart_size value)
\fB\-o\fR ensure_diskfree(default 0)
sets MB to ensure disk free space. This option sets the threshold of free disk space that s3fs keeps available for its cache files.
s3fs creates files on disk for downloading, uploading and caching.
If the free disk space falls below this value, s3fs uses as little disk space as it can, in exchange for performance.
@ -225,6 +228,9 @@ This option requires the IAM role name or "auto". If you specify "auto", s3fs wi
\fB\-o\fR ibm_iam_auth ( default is not using IBM IAM authentication )
This option instructs s3fs to use IBM IAM authentication. In this mode, the AWSAccessKey and AWSSecretKey will be used as IBM's Service-Instance-ID and APIKey, respectively.
.TP
\fB\-o\fR ibm_iam_endpoint ( default is https://iam.bluemix.net )
Set the URL to use for IBM IAM authentication.
.TP
\fB\-o\fR use_xattr ( default is not handling the extended attribute )
Enable handling of extended attributes (xattrs).
If you set this option, you can use extended attributes.
@ -256,6 +262,10 @@ Customize TLS cipher suite list. Expects a colon separated list of cipher suite
A list of available cipher suites, depending on your TLS engine, can be found on the CURL library documentation:
https://curl.haxx.se/docs/ssl-ciphers.html
.TP
\fB\-o\fR instance_name
The instance name of the current s3fs mountpoint.
This name will be added to logging messages and user agent headers sent by s3fs.
.TP
\fB\-o\fR complement_stat (complement lack of file/directory mode)
s3fs complements the missing file/directory mode when a file or directory object has no x-amz-meta-mode header.
By default, s3fs does not complement stat information for such an object, so the object cannot be listed or modified.


@ -130,8 +130,8 @@ bool AdditionalHeader::Load(const char* file)
// compile
regex_t* preg = new regex_t;
int result;
char errbuf[256];
if(0 != (result = regcomp(preg, key.c_str(), REG_EXTENDED | REG_NOSUB))){ // we do not need matching info
char errbuf[256];
regerror(result, preg, errbuf, sizeof(errbuf));
S3FS_PRN_ERR("failed to compile regex from %s key by %s.", key.c_str(), errbuf);
delete preg;


@ -142,7 +142,7 @@ pthread_mutex_t StatCache::stat_cache_lock;
//-------------------------------------------------------------------
// Constructor/Destructor
//-------------------------------------------------------------------
StatCache::StatCache() : IsExpireTime(false), IsExpireIntervalType(false), ExpireTime(0), CacheSize(1000), IsCacheNoObject(false)
StatCache::StatCache() : IsExpireTime(false), IsExpireIntervalType(false), ExpireTime(0), CacheSize(100000), IsCacheNoObject(false)
{
if(this == StatCache::getStatCacheData()){
stat_cache.clear();


@ -21,6 +21,7 @@
#ifndef S3FS_COMMON_H_
#define S3FS_COMMON_H_
#include <stdlib.h>
#include "../config.h"
//
@ -79,7 +80,7 @@ enum s3fs_log_level{
if(foreground){ \
fprintf(stdout, "%s%s:%s(%d): " fmt "%s\n", S3FS_LOG_LEVEL_STRING(level), __FILE__, __func__, __LINE__, __VA_ARGS__); \
}else{ \
syslog(S3FS_LOG_LEVEL_TO_SYSLOG(level), "%s:%s(%d): " fmt "%s", __FILE__, __func__, __LINE__, __VA_ARGS__); \
syslog(S3FS_LOG_LEVEL_TO_SYSLOG(level), "%s%s:%s(%d): " fmt "%s", instance_name.c_str(), __FILE__, __func__, __LINE__, __VA_ARGS__); \
} \
}
@ -88,7 +89,7 @@ enum s3fs_log_level{
if(foreground){ \
fprintf(stdout, "%s%s%s:%s(%d): " fmt "%s\n", S3FS_LOG_LEVEL_STRING(level), S3FS_LOG_NEST(nest), __FILE__, __func__, __LINE__, __VA_ARGS__); \
}else{ \
syslog(S3FS_LOG_LEVEL_TO_SYSLOG(level), "%s" fmt "%s", S3FS_LOG_NEST(nest), __VA_ARGS__); \
syslog(S3FS_LOG_LEVEL_TO_SYSLOG(level), "%s%s" fmt "%s", instance_name.c_str(), S3FS_LOG_NEST(nest), __VA_ARGS__); \
} \
}
@ -97,7 +98,7 @@ enum s3fs_log_level{
fprintf(stderr, "s3fs: " fmt "%s\n", __VA_ARGS__); \
}else{ \
fprintf(stderr, "s3fs: " fmt "%s\n", __VA_ARGS__); \
syslog(S3FS_LOG_LEVEL_TO_SYSLOG(S3FS_LOG_CRIT), "s3fs: " fmt "%s", __VA_ARGS__); \
syslog(S3FS_LOG_LEVEL_TO_SYSLOG(S3FS_LOG_CRIT), "%ss3fs: " fmt "%s", instance_name.c_str(), __VA_ARGS__); \
}
// Special macro for init message
@ -105,7 +106,7 @@ enum s3fs_log_level{
if(foreground){ \
fprintf(stdout, "%s%s%s:%s(%d): " fmt "%s\n", S3FS_LOG_LEVEL_STRING(S3FS_LOG_INFO), S3FS_LOG_NEST(0), __FILE__, __func__, __LINE__, __VA_ARGS__, ""); \
}else{ \
syslog(S3FS_LOG_LEVEL_TO_SYSLOG(S3FS_LOG_INFO), "%s" fmt "%s", S3FS_LOG_NEST(0), __VA_ARGS__, ""); \
syslog(S3FS_LOG_LEVEL_TO_SYSLOG(S3FS_LOG_INFO), "%s%s" fmt "%s", instance_name.c_str(), S3FS_LOG_NEST(0), __VA_ARGS__, ""); \
}
// [NOTE]
@ -168,6 +169,7 @@ extern std::string bucket;
extern std::string mount_prefix;
extern std::string endpoint;
extern std::string cipher_suites;
extern std::string instance_name;
extern s3fs_log_level debug_level;
extern const char* s3fs_log_nest[S3FS_LOG_NEST_MAX];
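A toy sketch of the effect of the macro changes above: every log line gains an instance_name prefix, so two s3fs mounts on one host produce distinguishable syslog output. The bracketed "[mount-a]" formatting and file name are assumptions for this example only.

```cpp
#include <syslog.h>
#include <string>

static std::string instance_name = "[mount-a]";  // from -o instance_name=...

// Prefix each syslog line with the instance name, as the S3FS_PRN_* macros do.
#define EXAMPLE_PRN_INFO(fmt, ...) \
    syslog(LOG_INFO, "%s%s:%s(%d): " fmt, instance_name.c_str(), \
           __FILE__, __func__, __LINE__, __VA_ARGS__)

int main()
{
    openlog("s3fs", LOG_PID, LOG_USER);
    EXAMPLE_PRN_INFO("HTTP response code = %ld", 200L);
    // syslog shows something like:
    //   s3fs[123]: [mount-a]example.cpp:main(15): HTTP response code = 200
    closelog();
    return 0;
}
```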


@ -226,8 +226,8 @@ bool BodyData::Append(void* ptr, size_t bytes)
const char* BodyData::str(void) const
{
static const char* strnull = "";
if(!text){
static const char* strnull = "";
return strnull;
}
return text;
@ -304,6 +304,7 @@ void CurlHandlerPool::ReturnHandler(CURL* h)
pthread_mutex_lock(&mLock);
if (mIndex < mMaxHandlers - 1) {
mHandlers[++mIndex] = h;
curl_easy_reset(h);
needCleanup = false;
S3FS_PRN_DBG("Return handler to pool: %d", mIndex);
}
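The idea behind the one-line change above, sketched as a self-contained pool (simplified; the real CurlHandlerPool is array-based and, per the DestroyCurlHandle(force) hunk below, discards rather than recycles handles whose request needed retries):

```cpp
#include <curl/curl.h>
#include <pthread.h>
#include <stack>

class HandlePool {
    std::stack<CURL*> pool_;
    pthread_mutex_t   lock_;
public:
    HandlePool() { pthread_mutex_init(&lock_, NULL); }
    ~HandlePool() {
        while(!pool_.empty()){ curl_easy_cleanup(pool_.top()); pool_.pop(); }
        pthread_mutex_destroy(&lock_);
    }
    CURL* Get() {
        pthread_mutex_lock(&lock_);
        CURL* h = pool_.empty() ? NULL : pool_.top();
        if(h){ pool_.pop(); }
        pthread_mutex_unlock(&lock_);
        return h ? h : curl_easy_init();
    }
    void Return(CURL* h) {
        // curl_easy_reset() clears every option the previous request set,
        // but keeps live connections, the session ID cache and the DNS
        // cache, so the next borrower starts clean without losing reuse.
        curl_easy_reset(h);
        pthread_mutex_lock(&lock_);
        pool_.push(h);
        pthread_mutex_unlock(&lock_);
    }
};
```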
@ -344,7 +345,7 @@ bool S3fsCurl::is_dns_cache = true; // default
bool S3fsCurl::is_ssl_session_cache= true; // default
long S3fsCurl::connect_timeout = 300; // default
time_t S3fsCurl::readwrite_timeout = 60; // default
int S3fsCurl::retries = 3; // default
int S3fsCurl::retries = 5; // default
bool S3fsCurl::is_public_bucket = false;
string S3fsCurl::default_acl = "private";
storage_class_t S3fsCurl::storage_class = STANDARD;
@ -395,16 +396,16 @@ bool S3fsCurl::InitS3fsCurl(const char* MimeFile)
if(!S3fsCurl::InitGlobalCurl()){
return false;
}
sCurlPool = new CurlHandlerPool(sCurlPoolSize);
if (!sCurlPool->Init()) {
return false;
}
if(!S3fsCurl::InitShareCurl()){
return false;
}
if(!S3fsCurl::InitCryptMutex()){
return false;
}
sCurlPool = new CurlHandlerPool(sCurlPoolSize);
if (!sCurlPool->Init()) {
return false;
}
return true;
}
@ -415,10 +416,12 @@ bool S3fsCurl::DestroyS3fsCurl(void)
if(!S3fsCurl::DestroyCryptMutex()){
result = false;
}
if(!S3fsCurl::DestroyShareCurl()){
if(!sCurlPool->Destroy()){
result = false;
}
if (!sCurlPool->Destroy()) {
delete sCurlPool;
sCurlPool = NULL;
if(!S3fsCurl::DestroyShareCurl()){
result = false;
}
if(!S3fsCurl::DestroyGlobalCurl()){
@ -631,6 +634,7 @@ void S3fsCurl::InitUserAgent(void)
S3fsCurl::userAgent += "; ";
S3fsCurl::userAgent += s3fs_crypt_lib_name();
S3fsCurl::userAgent += ")";
S3fsCurl::userAgent += instance_name;
}
}
@ -697,10 +701,8 @@ bool S3fsCurl::LocateBundle(void)
// See if environment variable CURL_CA_BUNDLE is set
// if so, check it, if it is a good path, then set the
// curl_ca_bundle variable to it
char *CURL_CA_BUNDLE;
if(0 == S3fsCurl::curl_ca_bundle.size()){
CURL_CA_BUNDLE = getenv("CURL_CA_BUNDLE");
char* CURL_CA_BUNDLE = getenv("CURL_CA_BUNDLE");
if(CURL_CA_BUNDLE != NULL) {
// check for existence and readability of the file
ifstream BF(CURL_CA_BUNDLE);
@ -1620,8 +1622,7 @@ int S3fsCurl::CurlDebugFunc(CURL* hcurl, curl_infotype type, char* data, size_t
break;
case CURLINFO_HEADER_IN:
case CURLINFO_HEADER_OUT:
size_t length, remaining;
int newline;
size_t remaining;
char* p;
// Print each line individually for tidy output
@ -1629,17 +1630,17 @@ int S3fsCurl::CurlDebugFunc(CURL* hcurl, curl_infotype type, char* data, size_t
p = data;
do {
char* eol = (char*)memchr(p, '\n', remaining);
newline = 0;
int newline = 0;
if (eol == NULL) {
eol = (char*)memchr(p, '\r', remaining);
} else if (eol > p && *(eol - 1) == '\r') {
newline++;
}
if (eol != NULL) {
} else {
if (eol > p && *(eol - 1) == '\r') {
newline++;
}
newline++;
eol++;
}
length = eol - p;
size_t length = eol - p;
S3FS_PRN_CURL("%c %.*s", CURLINFO_HEADER_IN == type ? '<' : '>', (int)length - newline, p);
remaining -= length;
p = eol;
@ -1727,7 +1728,7 @@ bool S3fsCurl::CreateCurlHandle(bool force)
S3FS_PRN_WARN("already create handle.");
return false;
}
if(!DestroyCurlHandle()){
if(!DestroyCurlHandle(true)){
S3FS_PRN_ERR("could not destroy handle.");
return false;
}
@ -1755,28 +1756,33 @@ bool S3fsCurl::CreateCurlHandle(bool force)
return true;
}
bool S3fsCurl::DestroyCurlHandle(void)
bool S3fsCurl::DestroyCurlHandle(bool force)
{
if(!hCurl){
return false;
}
pthread_mutex_lock(&S3fsCurl::curl_handles_lock);
S3fsCurl::curl_times.erase(hCurl);
S3fsCurl::curl_progress.erase(hCurl);
sCurlPool->ReturnHandler(hCurl);
hCurl = NULL;
ClearInternalData();
pthread_mutex_unlock(&S3fsCurl::curl_handles_lock);
if(hCurl){
pthread_mutex_lock(&S3fsCurl::curl_handles_lock);
S3fsCurl::curl_times.erase(hCurl);
S3fsCurl::curl_progress.erase(hCurl);
if(retry_count == 0 || force){
sCurlPool->ReturnHandler(hCurl);
}else{
curl_easy_cleanup(hCurl);
}
hCurl = NULL;
pthread_mutex_unlock(&S3fsCurl::curl_handles_lock);
}else{
return false;
}
return true;
}
bool S3fsCurl::ClearInternalData(void)
{
if(hCurl){
return false;
}
// Always clear internal data
//
type = REQTYPE_UNSET;
path = "";
base_path = "";
@ -1874,7 +1880,12 @@ bool S3fsCurl::RemakeHandle(void)
partdata.size = b_partdata_size;
// reset handle
curl_easy_cleanup(hCurl);
hCurl = curl_easy_init();
ResetHandle();
// disable ssl cache, so that a new session will be created
curl_easy_setopt(hCurl, CURLOPT_SSL_SESSIONID_CACHE, 0);
curl_easy_setopt(hCurl, CURLOPT_SHARE, NULL);
// set options
switch(type){
@ -2052,7 +2063,7 @@ int S3fsCurl::RequestPerform(void)
return 0;
}
if(500 <= LastResponseCode){
S3FS_PRN_INFO3("HTTP response code %ld", LastResponseCode);
S3FS_PRN_ERR("HTTP response code = %ld Body Text: %s", LastResponseCode, (bodydata ? bodydata->str() : ""));
sleep(4);
break;
}
@ -2060,13 +2071,11 @@ int S3fsCurl::RequestPerform(void)
// Service response codes which are >= 400 && < 500
switch(LastResponseCode){
case 400:
S3FS_PRN_INFO3("HTTP response code 400 was returned, returning EIO.");
S3FS_PRN_DBG("Body Text: %s", (bodydata ? bodydata->str() : ""));
S3FS_PRN_ERR("HTTP response code %ld, returning EIO. Body Text: %s", LastResponseCode, (bodydata ? bodydata->str() : ""));
return -EIO;
case 403:
S3FS_PRN_INFO3("HTTP response code 403 was returned, returning EPERM");
S3FS_PRN_DBG("Body Text: %s", (bodydata ? bodydata->str() : ""));
S3FS_PRN_ERR("HTTP response code %ld, returning EPERM. Body Text: %s", LastResponseCode, (bodydata ? bodydata->str() : ""));
return -EPERM;
case 404:
@ -2075,8 +2084,7 @@ int S3fsCurl::RequestPerform(void)
return -ENOENT;
default:
S3FS_PRN_INFO3("HTTP response code = %ld, returning EIO", LastResponseCode);
S3FS_PRN_DBG("Body Text: %s", (bodydata ? bodydata->str() : ""));
S3FS_PRN_ERR("HTTP response code %ld, returning EIO. Body Text: %s", LastResponseCode, (bodydata ? bodydata->str() : ""));
return -EIO;
}
break;
@ -2712,10 +2720,10 @@ int S3fsCurl::HeadRequest(const char* tpath, headers_t& meta)
// If has SSE-C keys, try to get with all SSE-C keys.
for(int pos = 0; static_cast<size_t>(pos) < S3fsCurl::sseckeys.size(); pos++){
if(!DestroyCurlHandle()){
return result;
break;
}
if(!PreHeadRequest(tpath, NULL, NULL, pos)){
return result;
break;
}
if(0 == (result = RequestPerform())){
break;
@ -2867,7 +2875,6 @@ int S3fsCurl::PutRequest(const char* tpath, headers_t& meta, int fd)
{
struct stat st;
FILE* file = NULL;
int fd2;
S3FS_PRN_INFO3("[tpath=%s]", SAFESTRPTR(tpath));
@ -2876,6 +2883,7 @@ int S3fsCurl::PutRequest(const char* tpath, headers_t& meta, int fd)
}
if(-1 != fd){
// duplicate fd
int fd2;
if(-1 == (fd2 = dup(fd)) || -1 == fstat(fd2, &st) || 0 != lseek(fd2, 0, SEEK_SET) || NULL == (file = fdopen(fd2, "rb"))){
S3FS_PRN_ERR("Could not duplicate file descriptor(errno=%d)", errno);
if(-1 != fd2){
@ -2980,7 +2988,7 @@ int S3fsCurl::PutRequest(const char* tpath, headers_t& meta, int fd)
int S3fsCurl::PreGetObjectRequest(const char* tpath, int fd, off_t start, ssize_t size, sse_type_t ssetype, string& ssevalue)
{
S3FS_PRN_INFO3("[tpath=%s][start=%jd][size=%zd]", SAFESTRPTR(tpath), (intmax_t)start, size);
S3FS_PRN_INFO3("[tpath=%s][start=%jd][size=%jd]", SAFESTRPTR(tpath), (intmax_t)start, (intmax_t)size);
if(!tpath || -1 == fd || 0 > start || 0 > size){
return -1;
@ -3040,7 +3048,7 @@ int S3fsCurl::GetObjectRequest(const char* tpath, int fd, off_t start, ssize_t s
{
int result;
S3FS_PRN_INFO3("[tpath=%s][start=%jd][size=%zd]", SAFESTRPTR(tpath), (intmax_t)start, size);
S3FS_PRN_INFO3("[tpath=%s][start=%jd][size=%jd]", SAFESTRPTR(tpath), (intmax_t)start, (intmax_t)size);
if(!tpath){
return -1;
@ -3418,7 +3426,7 @@ int S3fsCurl::AbortMultipartUpload(const char* tpath, string& upload_id)
int S3fsCurl::UploadMultipartPostSetup(const char* tpath, int part_num, const string& upload_id)
{
S3FS_PRN_INFO3("[tpath=%s][start=%jd][size=%zd][part=%d]", SAFESTRPTR(tpath), (intmax_t)(partdata.startpos), partdata.size, part_num);
S3FS_PRN_INFO3("[tpath=%s][start=%jd][size=%jd][part=%d]", SAFESTRPTR(tpath), (intmax_t)(partdata.startpos), (intmax_t)(partdata.size), part_num);
if(-1 == partdata.fd || -1 == partdata.startpos || -1 == partdata.size){
return -1;
@ -3492,7 +3500,7 @@ int S3fsCurl::UploadMultipartPostRequest(const char* tpath, int part_num, const
{
int result;
S3FS_PRN_INFO3("[tpath=%s][start=%jd][size=%zd][part=%d]", SAFESTRPTR(tpath), (intmax_t)(partdata.startpos), partdata.size, part_num);
S3FS_PRN_INFO3("[tpath=%s][start=%jd][size=%jd][part=%d]", SAFESTRPTR(tpath), (intmax_t)(partdata.startpos), (intmax_t)(partdata.size), part_num);
// setup
if(0 != (result = S3fsCurl::UploadMultipartPostSetup(tpath, part_num, upload_id))){
@ -3893,12 +3901,15 @@ int S3fsMultiCurl::MultiPerform(void)
{
std::vector<pthread_t> threads;
bool success = true;
bool isMultiHead = false;
for(s3fscurlmap_t::iterator iter = cMap_req.begin(); iter != cMap_req.end(); ++iter) {
pthread_t thread;
S3fsCurl* s3fscurl = (*iter).second;
int rc;
isMultiHead |= s3fscurl->GetOp() == "HEAD";
rc = pthread_create(&thread, NULL, S3fsMultiCurl::RequestPerformWrapper, static_cast<void*>(s3fscurl));
if (rc != 0) {
success = false;
@ -3919,9 +3930,8 @@ int S3fsMultiCurl::MultiPerform(void)
S3FS_PRN_ERR("failed pthread_join - rc(%d)", rc);
} else {
int int_retval = (int)(intptr_t)(retval);
if (int_retval) {
S3FS_PRN_ERR("thread failed - rc(%d)", int_retval);
success = false;
if (int_retval && !(int_retval == ENOENT && isMultiHead)) {
S3FS_PRN_WARN("thread failed - rc(%d)", int_retval);
}
}
}
@ -3949,7 +3959,10 @@ int S3fsMultiCurl::MultiRead(void)
isRetry = true;
}else if(404 == responseCode){
// not found
S3FS_PRN_WARN("failed a request(%ld: %s)", responseCode, s3fscurl->url.c_str());
// HEAD requests on readdir_multi_head can return 404
if(s3fscurl->GetOp() != "HEAD"){
S3FS_PRN_WARN("failed a request(%ld: %s)", responseCode, s3fscurl->url.c_str());
}
}else if(500 == responseCode){
// case of all other result, do retry.(11/13/2013)
// because it was found that s3fs got 500 error from S3, but could success
@ -3994,8 +4007,6 @@ int S3fsMultiCurl::MultiRead(void)
int S3fsMultiCurl::Request(void)
{
int result;
S3FS_PRN_INFO3("[count=%zu]", cMap_all.size());
// Make request list.
@ -4005,6 +4016,7 @@ int S3fsMultiCurl::Request(void)
//
while(!cMap_all.empty()){
// set curl handle to multi handle
int result;
int cnt;
s3fscurlmap_t::iterator iter;
for(cnt = 0, iter = cMap_all.begin(); cnt < S3fsMultiCurl::max_multireq && iter != cMap_all.end(); cMap_all.erase(iter++), cnt++){
@ -4071,7 +4083,7 @@ struct curl_slist* curl_slist_sort_insert(struct curl_slist* list, const char* k
if(!key){
return list;
}
if(NULL == (new_item = (struct curl_slist*)malloc(sizeof(struct curl_slist)))){
if(NULL == (new_item = reinterpret_cast<struct curl_slist*>(malloc(sizeof(struct curl_slist))))){
return list;
}


@ -405,7 +405,7 @@ class S3fsCurl
// methods
bool CreateCurlHandle(bool force = false);
bool DestroyCurlHandle(void);
bool DestroyCurlHandle(bool force = false);
bool LoadIAMRoleFromMetaData(void);
bool AddSseRequestHead(sse_type_t ssetype, std::string& ssevalue, bool is_only_c, bool is_copy);
@ -439,6 +439,7 @@ class S3fsCurl
std::string GetBasePath(void) const { return base_path; }
std::string GetSpacialSavedPath(void) const { return saved_path; }
std::string GetUrl(void) const { return url; }
std::string GetOp(void) const { return op; }
headers_t* GetResponseHeaders(void) { return &responseHeaders; }
BodyData* GetBodyData(void) const { return bodydata; }
BodyData* GetHeadData(void) const { return headdata; }


@ -725,15 +725,12 @@ void FdEntity::Close(void)
}
}
int FdEntity::Dup(bool no_fd_lock_wait)
int FdEntity::Dup()
{
S3FS_PRN_DBG("[path=%s][fd=%d][refcnt=%d]", path.c_str(), fd, (-1 != fd ? refcnt + 1 : refcnt));
if(-1 != fd){
AutoLock auto_lock(&fdent_lock, no_fd_lock_wait);
if (!auto_lock.isLockAcquired()) {
return -1;
}
AutoLock auto_lock(&fdent_lock);
refcnt++;
}
return fd;
@ -756,13 +753,23 @@ int FdEntity::OpenMirrorFile(void)
return -EIO;
}
// create seed generating mirror file name
unsigned int seed = static_cast<unsigned int>(time(NULL));
int urandom_fd;
if(-1 != (urandom_fd = open("/dev/urandom", O_RDONLY))){
unsigned int rand_data;
if(sizeof(rand_data) == read(urandom_fd, &rand_data, sizeof(rand_data))){
seed ^= rand_data;
}
close(urandom_fd);
}
// try to link mirror file
while(true){
// make random(temp) file path
// (do not care for threading, because allowed any value returned.)
//
char szfile[NAME_MAX + 1];
unsigned int seed = static_cast<unsigned int>(time(NULL));
sprintf(szfile, "%x.tmp", rand_r(&seed));
mirrorpath = bupdir + "/" + szfile;
@ -774,6 +781,7 @@ int FdEntity::OpenMirrorFile(void)
S3FS_PRN_ERR("could not link mirror file(%s) to cache file(%s) by errno(%d).", mirrorpath.c_str(), cachepath.c_str(), errno);
return -errno;
}
++seed;
}
// open mirror file
@ -785,20 +793,19 @@ int FdEntity::OpenMirrorFile(void)
return mirrorfd;
}
// [NOTE]
// This method does not lock fdent_lock, because FdManager::fd_manager_lock
// is locked before calling.
//
int FdEntity::Open(headers_t* pmeta, ssize_t size, time_t time, bool no_fd_lock_wait)
{
S3FS_PRN_DBG("[path=%s][fd=%d][size=%jd][time=%jd]", path.c_str(), fd, (intmax_t)size, (intmax_t)time);
AutoLock auto_lock(&fdent_lock, no_fd_lock_wait);
if (!auto_lock.isLockAcquired()) {
// had to wait for fd lock, return
return -EIO;
}
if(-1 != fd){
// already opened, needs to increment refcnt.
if (fd != Dup(no_fd_lock_wait)) {
// had to wait for fd lock, return
return -EIO;
}
Dup();
// check only file size(do not need to save cfs and time.
if(0 <= size && pagelist.Size() != static_cast<size_t>(size)){
@ -1429,7 +1436,7 @@ int FdEntity::NoCacheCompleteMultipartPost(void)
int FdEntity::RowFlush(const char* tpath, bool force_sync)
{
int result;
int result = 0;
S3FS_PRN_INFO3("[tpath=%s][path=%s][fd=%d]", SAFESTRPTR(tpath), path.c_str(), fd);
@ -1448,10 +1455,12 @@ int FdEntity::RowFlush(const char* tpath, bool force_sync)
if(0 < restsize){
if(0 == upload_id.length()){
// check disk space
if(FdManager::IsSafeDiskSpace(NULL, restsize)){
if(ReserveDiskSpace(restsize)){
// enough disk space
// Load all uninitialized area
if(0 != (result = Load())){
result = Load();
FdManager::get()->FreeReservedDiskSpace(restsize);
if(0 != result){
S3FS_PRN_ERR("failed to upload all area(errno=%d)", result);
return static_cast<ssize_t>(result);
}
@ -1554,6 +1563,32 @@ int FdEntity::RowFlush(const char* tpath, bool force_sync)
return result;
}
// [NOTICE]
// Need to lock before calling this method.
bool FdEntity::ReserveDiskSpace(size_t size)
{
if(FdManager::get()->ReserveDiskSpace(size)){
return true;
}
if(!is_modify){
// try to clear all cache for this fd.
pagelist.Init(pagelist.Size(), false);
if(-1 == ftruncate(fd, 0) || -1 == ftruncate(fd, pagelist.Size())){
S3FS_PRN_ERR("failed to truncate temporary file(%d).", fd);
return false;
}
if(FdManager::get()->ReserveDiskSpace(size)){
return true;
}
}
FdManager::get()->CleanupCacheDir();
return FdManager::get()->ReserveDiskSpace(size);
}
ssize_t FdEntity::Read(char* bytes, off_t start, size_t size, bool force_load)
{
S3FS_PRN_DBG("[path=%s][fd=%d][offset=%jd][size=%zu]", path.c_str(), fd, (intmax_t)start, size);
@ -1561,38 +1596,16 @@ ssize_t FdEntity::Read(char* bytes, off_t start, size_t size, bool force_load)
if(-1 == fd){
return -EBADF;
}
// check if not enough disk space left BEFORE locking fd
if(FdManager::IsCacheDir() && !FdManager::IsSafeDiskSpace(NULL, size)){
FdManager::get()->CleanupCacheDir();
}
AutoLock auto_lock(&fdent_lock);
if(force_load){
pagelist.SetPageLoadedStatus(start, size, false);
}
int result;
ssize_t rsize;
// check disk space
if(0 < pagelist.GetTotalUnloadedPageSize(start, size)){
if(!FdManager::IsSafeDiskSpace(NULL, size)){
// [NOTE]
// If the area of this entity fd used can be released, try to do it.
// But If file data is updated, we can not even release of fd.
// Fundamentally, this method will fail as long as the disk capacity
// is not ensured.
//
if(!is_modify){
// try to clear all cache for this fd.
pagelist.Init(pagelist.Size(), false);
if(-1 == ftruncate(fd, 0) || -1 == ftruncate(fd, pagelist.Size())){
S3FS_PRN_ERR("failed to truncate temporary file(%d).", fd);
return -ENOSPC;
}
}
}
// load size(for prefetch)
size_t load_size = size;
if(static_cast<size_t>(start + size) < pagelist.Size()){
@ -1604,8 +1617,25 @@ ssize_t FdEntity::Read(char* bytes, off_t start, size_t size, bool force_load)
load_size = static_cast<size_t>(pagelist.Size() - start);
}
}
if(!ReserveDiskSpace(load_size)){
S3FS_PRN_WARN("could not reserve disk space for pre-fetch download");
load_size = size;
if(!ReserveDiskSpace(load_size)){
S3FS_PRN_ERR("could not reserve disk space for pre-fetch download");
return -ENOSPC;
}
}
// Loading
if(0 < size && 0 != (result = Load(start, load_size))){
int result = 0;
if(0 < size){
result = Load(start, load_size);
}
FdManager::get()->FreeReservedDiskSpace(load_size);
if(0 != result){
S3FS_PRN_ERR("could not download. start(%jd), size(%zu), errno(%d)", (intmax_t)start, size, result);
return -EIO;
}
@ -1642,17 +1672,21 @@ ssize_t FdEntity::Write(const char* bytes, off_t start, size_t size)
pagelist.SetPageLoadedStatus(static_cast<off_t>(pagelist.Size()), static_cast<size_t>(start) - pagelist.Size(), false);
}
int result;
int result = 0;
ssize_t wsize;
if(0 == upload_id.length()){
// check disk space
size_t restsize = pagelist.GetTotalUnloadedPageSize(0, start) + size;
if(FdManager::IsSafeDiskSpace(NULL, restsize)){
if(ReserveDiskSpace(restsize)){
// enough disk space
// Load uninitialized area which starts from 0 to (start + size) before writing.
if(0 < start && 0 != (result = Load(0, static_cast<size_t>(start)))){
if(0 < start){
result = Load(0, static_cast<size_t>(start));
}
FdManager::get()->FreeReservedDiskSpace(restsize);
if(0 != result){
S3FS_PRN_ERR("failed to load uninitialized area before writing(errno=%d)", result);
return static_cast<ssize_t>(result);
}
@ -1750,6 +1784,7 @@ void FdEntity::CleanupCache()
FdManager FdManager::singleton;
pthread_mutex_t FdManager::fd_manager_lock;
pthread_mutex_t FdManager::cache_cleanup_lock;
pthread_mutex_t FdManager::reserved_diskspace_lock;
bool FdManager::is_lock_init(false);
string FdManager::cache_dir("");
bool FdManager::check_cache_dir_exist(false);
@ -1901,19 +1936,7 @@ bool FdManager::CheckCacheDirExist(void)
size_t FdManager::SetEnsureFreeDiskSpace(size_t size)
{
size_t old = FdManager::free_disk_space;
if(0 == size){
if(0 == FdManager::free_disk_space){
FdManager::free_disk_space = static_cast<size_t>(S3fsCurl::GetMultipartSize() * S3fsCurl::GetMaxParallelCount());
}
}else{
if(0 == FdManager::free_disk_space){
FdManager::free_disk_space = max(size, static_cast<size_t>(S3fsCurl::GetMultipartSize() * S3fsCurl::GetMaxParallelCount()));
}else{
if(static_cast<size_t>(S3fsCurl::GetMultipartSize() * S3fsCurl::GetMaxParallelCount()) <= size){
FdManager::free_disk_space = size;
}
}
}
FdManager::free_disk_space = size;
return old;
}
@ -1957,6 +1980,7 @@ FdManager::FdManager()
try{
pthread_mutex_init(&FdManager::fd_manager_lock, NULL);
pthread_mutex_init(&FdManager::cache_cleanup_lock, NULL);
pthread_mutex_init(&FdManager::reserved_diskspace_lock, NULL);
FdManager::is_lock_init = true;
}catch(exception& e){
FdManager::is_lock_init = false;
@ -1980,6 +2004,7 @@ FdManager::~FdManager()
try{
pthread_mutex_destroy(&FdManager::fd_manager_lock);
pthread_mutex_destroy(&FdManager::cache_cleanup_lock);
pthread_mutex_destroy(&FdManager::reserved_diskspace_lock);
}catch(exception& e){
S3FS_PRN_CRIT("failed to init mutex");
}
@ -2027,56 +2052,58 @@ FdEntity* FdManager::Open(const char* path, headers_t* pmeta, ssize_t size, time
if(!path || '\0' == path[0]){
return NULL;
}
AutoLock auto_lock(&FdManager::fd_manager_lock);
FdEntity* ent;
{
AutoLock auto_lock(&FdManager::fd_manager_lock);
// search in mapping by key(path)
fdent_map_t::iterator iter = fent.find(string(path));
// search in mapping by key(path)
fdent_map_t::iterator iter = fent.find(string(path));
if(fent.end() == iter && !force_tmpfile && !FdManager::IsCacheDir()){
// If the cache directory is not specified, s3fs opens a temporary file
// when the file is opened.
// Then if it could not find a entity in map for the file, s3fs should
// search a entity in all which opened the temporary file.
//
for(iter = fent.begin(); iter != fent.end(); ++iter){
if((*iter).second && (*iter).second->IsOpen() && 0 == strcmp((*iter).second->GetPath(), path)){
break; // found opened fd in mapping
if(fent.end() == iter && !force_tmpfile && !FdManager::IsCacheDir()){
// If the cache directory is not specified, s3fs opens a temporary file
// when the file is opened.
// Then if it could not find a entity in map for the file, s3fs should
// search a entity in all which opened the temporary file.
//
for(iter = fent.begin(); iter != fent.end(); ++iter){
if((*iter).second && (*iter).second->IsOpen() && 0 == strcmp((*iter).second->GetPath(), path)){
break; // found opened fd in mapping
}
}
}
}
FdEntity* ent;
if(fent.end() != iter){
// found
ent = (*iter).second;
if(fent.end() != iter){
// found
ent = (*iter).second;
}else if(is_create){
// not found
string cache_path = "";
if(!force_tmpfile && !FdManager::MakeCachePath(path, cache_path, true)){
S3FS_PRN_ERR("failed to make cache path for object(%s).", path);
}else if(is_create){
// not found
string cache_path = "";
if(!force_tmpfile && !FdManager::MakeCachePath(path, cache_path, true)){
S3FS_PRN_ERR("failed to make cache path for object(%s).", path);
return NULL;
}
// make new obj
ent = new FdEntity(path, cache_path.c_str());
if(0 < cache_path.size()){
// using cache
fent[string(path)] = ent;
}else{
// not using cache, so the key of fdentity is set not really existing path.
// (but not strictly unexisting path.)
//
// [NOTE]
// The reason why this process here, please look at the definition of the
// comments of NOCACHE_PATH_PREFIX_FORM symbol.
//
string tmppath("");
FdManager::MakeRandomTempPath(path, tmppath);
fent[tmppath] = ent;
}
}else{
return NULL;
}
// make new obj
ent = new FdEntity(path, cache_path.c_str());
if(0 < cache_path.size()){
// using cache
fent[string(path)] = ent;
}else{
// not using cache, so the key of fdentity is set not really existing path.
// (but not strictly unexisting path.)
//
// [NOTE]
// The reason why this process here, please look at the definition of the
// comments of NOCACHE_PATH_PREFIX_FORM symbol.
//
string tmppath("");
FdManager::MakeRandomTempPath(path, tmppath);
fent[tmppath] = ent;
}
}else{
return NULL;
}
// open
@ -2181,17 +2208,22 @@ bool FdManager::ChangeEntityToTempPath(FdEntity* ent, const char* path)
void FdManager::CleanupCacheDir()
{
if (!FdManager::IsCacheDir()) {
S3FS_PRN_INFO("cache cleanup requested");
if(!FdManager::IsCacheDir()){
return;
}
AutoLock auto_lock(&FdManager::cache_cleanup_lock, true);
AutoLock auto_lock_no_wait(&FdManager::cache_cleanup_lock, true);
if (!auto_lock.isLockAcquired()) {
return;
if(auto_lock_no_wait.isLockAcquired()){
S3FS_PRN_INFO("cache cleanup started");
CleanupCacheDirInternal("");
S3FS_PRN_INFO("cache cleanup ended");
}else{
// wait for other thread to finish cache cleanup
AutoLock auto_lock(&FdManager::cache_cleanup_lock);
}
CleanupCacheDirInternal("");
}
void FdManager::CleanupCacheDirInternal(const std::string &path)
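The locking above implements a clean-or-wait scheme: one thread performs the cleanup while late arrivals merely block until it finishes instead of scanning the cache directory again. A minimal sketch with raw pthreads (the real code uses the AutoLock helper with a no-wait mode):

```cpp
#include <pthread.h>

static pthread_mutex_t cache_cleanup_lock = PTHREAD_MUTEX_INITIALIZER;

void CleanupCacheDir()
{
    if(0 == pthread_mutex_trylock(&cache_cleanup_lock)){
        // This thread won the race: perform the actual cleanup once.
        // ... CleanupCacheDirInternal("") would run here ...
        pthread_mutex_unlock(&cache_cleanup_lock);
    }else{
        // Another thread is already cleaning. Block until it finishes so
        // the caller can assume cache space was freed, then return without
        // doing a redundant second pass.
        pthread_mutex_lock(&cache_cleanup_lock);
        pthread_mutex_unlock(&cache_cleanup_lock);
    }
}
```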
@ -2224,16 +2256,38 @@ void FdManager::CleanupCacheDirInternal(const std::string &path)
}else{
FdEntity* ent;
if(NULL == (ent = FdManager::get()->Open(next_path.c_str(), NULL, -1, -1, false, true, true))){
S3FS_PRN_DBG("skipping locked file: %s", next_path.c_str());
continue;
}
ent->CleanupCache();
if(ent->IsMultiOpened()){
S3FS_PRN_DBG("skipping opened file: %s", next_path.c_str());
}else{
ent->CleanupCache();
S3FS_PRN_DBG("cleaned up: %s", next_path.c_str());
}
Close(ent);
}
}
closedir(dp);
}
bool FdManager::ReserveDiskSpace(size_t size)
{
AutoLock auto_lock(&FdManager::reserved_diskspace_lock);
if(IsSafeDiskSpace(NULL, size)){
free_disk_space += size;
return true;
}
return false;
}
void FdManager::FreeReservedDiskSpace(size_t size)
{
AutoLock auto_lock(&FdManager::reserved_diskspace_lock);
free_disk_space -= size;
}
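Why the mutex-guarded pair above matters: the old code only called IsSafeDiskSpace() before loading, so two concurrent operations could both pass the check and together overcommit the cache disk. A self-contained sketch of the check-and-reserve idea (the real IsSafeDiskSpace() consults statvfs(); the stand-in below is an assumption for the example):

```cpp
#include <pthread.h>
#include <cstddef>

static pthread_mutex_t reserve_lock = PTHREAD_MUTEX_INITIALIZER;
static size_t free_disk_space = 100u * 1024 * 1024;  // ensure-free threshold
static size_t available_bytes = 1024u * 1024 * 1024; // stand-in for statvfs()

static bool IsSafeDiskSpace(size_t size)
{
    return free_disk_space + size <= available_bytes;
}

bool ReserveDiskSpace(size_t size)
{
    pthread_mutex_lock(&reserve_lock);
    bool ok = IsSafeDiskSpace(size);
    if(ok){
        // Raising the threshold by the reserved amount makes that space
        // invisible to other threads until FreeReservedDiskSpace() runs.
        free_disk_space += size;
    }
    pthread_mutex_unlock(&reserve_lock);
    return ok;
}

void FreeReservedDiskSpace(size_t size)
{
    pthread_mutex_lock(&reserve_lock);
    free_disk_space -= size;
    pthread_mutex_unlock(&reserve_lock);
}
```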
/*
* Local variables:
* tab-width: 4


@ -144,9 +144,10 @@ class FdEntity
void Close(void);
bool IsOpen(void) const { return (-1 != fd); }
bool IsMultiOpened(void) const { return refcnt > 1; }
int Open(headers_t* pmeta = NULL, ssize_t size = -1, time_t time = -1, bool no_fd_lock_wait = false);
bool OpenAndLoadAll(headers_t* pmeta = NULL, size_t* size = NULL, bool force_load = false);
int Dup(bool no_fd_lock_wait = false);
int Dup();
const char* GetPath(void) const { return path.c_str(); }
void SetPath(const std::string &newpath) { path = newpath; }
@ -173,6 +174,7 @@ class FdEntity
ssize_t Read(char* bytes, off_t start, size_t size, bool force_load = false);
ssize_t Write(const char* bytes, off_t start, size_t size);
bool ReserveDiskSpace(size_t size);
void CleanupCache();
};
typedef std::map<std::string, class FdEntity*> fdent_map_t; // key=path, value=FdEntity*
@ -186,6 +188,7 @@ class FdManager
static FdManager singleton;
static pthread_mutex_t fd_manager_lock;
static pthread_mutex_t cache_cleanup_lock;
static pthread_mutex_t reserved_diskspace_lock;
static bool is_lock_init;
static std::string cache_dir;
static bool check_cache_dir_exist;
@ -217,8 +220,9 @@ class FdManager
static size_t GetEnsureFreeDiskSpace(void) { return FdManager::free_disk_space; }
static size_t SetEnsureFreeDiskSpace(size_t size);
static size_t InitEnsureFreeDiskSpace(void) { return SetEnsureFreeDiskSpace(0); }
static bool IsSafeDiskSpace(const char* path, size_t size);
static void FreeReservedDiskSpace(size_t size);
bool ReserveDiskSpace(size_t size);
FdEntity* GetFdEntity(const char* path, int existfd = -1);
FdEntity* Open(const char* path, headers_t* pmeta = NULL, ssize_t size = -1, time_t time = -1, bool force_tmpfile = false, bool is_create = true, bool no_fd_lock_wait = false);


@ -74,6 +74,11 @@ bool s3fs_init_global_ssl(void)
if(GNUTLS_E_SUCCESS != gnutls_global_init()){
return false;
}
#ifndef USE_GNUTLS_NETTLE
if(NULL == gcry_check_version(NULL)){
return false;
}
#endif // USE_GNUTLS_NETTLE
return true;
}
@ -107,7 +112,7 @@ bool s3fs_HMAC(const void* key, size_t keylen, const unsigned char* data, size_t
return false;
}
if(NULL == (*digest = (unsigned char*)malloc(SHA1_DIGEST_SIZE))){
if(NULL == (*digest = reinterpret_cast<unsigned char*>(malloc(SHA1_DIGEST_SIZE)))){
return false;
}
@ -126,7 +131,7 @@ bool s3fs_HMAC256(const void* key, size_t keylen, const unsigned char* data, siz
return false;
}
if(NULL == (*digest = (unsigned char*)malloc(SHA256_DIGEST_SIZE))){
if(NULL == (*digest = reinterpret_cast<unsigned char*>(malloc(SHA256_DIGEST_SIZE)))){
return false;
}
@ -150,7 +155,7 @@ bool s3fs_HMAC(const void* key, size_t keylen, const unsigned char* data, size_t
if(0 == (*digestlen = gnutls_hmac_get_len(GNUTLS_MAC_SHA1))){
return false;
}
if(NULL == (*digest = (unsigned char*)malloc(*digestlen + 1))){
if(NULL == (*digest = reinterpret_cast<unsigned char*>(malloc(*digestlen + 1)))){
return false;
}
if(0 > gnutls_hmac_fast(GNUTLS_MAC_SHA1, key, keylen, data, datalen, *digest)){
@ -170,7 +175,7 @@ bool s3fs_HMAC256(const void* key, size_t keylen, const unsigned char* data, siz
if(0 == (*digestlen = gnutls_hmac_get_len(GNUTLS_MAC_SHA256))){
return false;
}
if(NULL == (*digest = (unsigned char*)malloc(*digestlen + 1))){
if(NULL == (*digest = reinterpret_cast<unsigned char*>(malloc(*digestlen + 1)))){
return false;
}
if(0 > gnutls_hmac_fast(GNUTLS_MAC_SHA256, key, keylen, data, datalen, *digest)){
@ -221,7 +226,7 @@ unsigned char* s3fs_md5hexsum(int fd, off_t start, ssize_t size)
md5_update(&ctx_md5, bytes, buf);
memset(buf, 0, 512);
}
if(NULL == (result = (unsigned char*)malloc(get_md5_digest_length()))){
if(NULL == (result = reinterpret_cast<unsigned char*>(malloc(get_md5_digest_length())))){
return NULL;
}
md5_digest(&ctx_md5, get_md5_digest_length(), result);
@ -272,12 +277,14 @@ unsigned char* s3fs_md5hexsum(int fd, off_t start, ssize_t size)
}else if(-1 == bytes){
// error
S3FS_PRN_ERR("file read error(%d)", errno);
gcry_md_close(ctx_md5);
return NULL;
}
gcry_md_write(ctx_md5, buf, bytes);
memset(buf, 0, 512);
}
if(NULL == (result = (unsigned char*)malloc(get_md5_digest_length()))){
if(NULL == (result = reinterpret_cast<unsigned char*>(malloc(get_md5_digest_length())))){
gcry_md_close(ctx_md5);
return NULL;
}
memcpy(result, gcry_md_read(ctx_md5, 0), get_md5_digest_length());
@ -346,7 +353,7 @@ unsigned char* s3fs_sha256hexsum(int fd, off_t start, ssize_t size)
sha256_update(&ctx_sha256, bytes, buf);
memset(buf, 0, 512);
}
if(NULL == (result = (unsigned char*)malloc(get_sha256_digest_length()))){
if(NULL == (result = reinterpret_cast<unsigned char*>(malloc(get_sha256_digest_length())))){
return NULL;
}
sha256_digest(&ctx_sha256, get_sha256_digest_length(), result);
@ -418,12 +425,14 @@ unsigned char* s3fs_sha256hexsum(int fd, off_t start, ssize_t size)
}else if(-1 == bytes){
// error
S3FS_PRN_ERR("file read error(%d)", errno);
gcry_md_close(ctx_sha256);
return NULL;
}
gcry_md_write(ctx_sha256, buf, bytes);
memset(buf, 0, 512);
}
if(NULL == (result = (unsigned char*)malloc(get_sha256_digest_length()))){
if(NULL == (result = reinterpret_cast<unsigned char*>(malloc(get_sha256_digest_length())))){
gcry_md_close(ctx_sha256);
return NULL;
}
memcpy(result, gcry_md_read(ctx_sha256, 0), get_sha256_digest_length());


@ -54,8 +54,12 @@ const char* s3fs_crypt_lib_name(void)
//-------------------------------------------------------------------
bool s3fs_init_global_ssl(void)
{
NSS_Init(NULL);
NSS_NoDB_Init(NULL);
PR_Init(PR_USER_THREAD, PR_PRIORITY_NORMAL, 0);
if(SECSuccess != NSS_NoDB_Init(NULL)){
S3FS_PRN_ERR("Failed NSS_NoDB_Init call.");
return false;
}
return true;
}
@ -124,7 +128,7 @@ static bool s3fs_HMAC_RAW(const void* key, size_t keylen, const unsigned char* d
PK11_FreeSymKey(pKey);
PK11_FreeSlot(Slot);
if(NULL == (*digest = (unsigned char*)malloc(*digestlen))){
if(NULL == (*digest = reinterpret_cast<unsigned char*>(malloc(*digestlen)))){
return false;
}
memcpy(*digest, tmpdigest, *digestlen);
@ -183,12 +187,13 @@ unsigned char* s3fs_md5hexsum(int fd, off_t start, ssize_t size)
}else if(-1 == bytes){
// error
S3FS_PRN_ERR("file read error(%d)", errno);
PK11_DestroyContext(md5ctx, PR_TRUE);
return NULL;
}
PK11_DigestOp(md5ctx, buf, bytes);
memset(buf, 0, 512);
}
if(NULL == (result = (unsigned char*)malloc(get_md5_digest_length()))){
if(NULL == (result = reinterpret_cast<unsigned char*>(malloc(get_md5_digest_length())))){
PK11_DestroyContext(md5ctx, PR_TRUE);
return NULL;
}
@ -269,7 +274,7 @@ unsigned char* s3fs_sha256hexsum(int fd, off_t start, ssize_t size)
PK11_DigestOp(sha256ctx, buf, bytes);
memset(buf, 0, 512);
}
if(NULL == (result = (unsigned char*)malloc(get_sha256_digest_length()))){
if(NULL == (result = reinterpret_cast<unsigned char*>(malloc(get_sha256_digest_length())))){
PK11_DestroyContext(sha256ctx, PR_TRUE);
return NULL;
}


@ -100,6 +100,7 @@ std::string host = "https://s3.amazonaws.com";
std::string bucket = "";
std::string endpoint = "us-east-1";
std::string cipher_suites = "";
std::string instance_name = "";
s3fs_log_level debug_level = S3FS_LOG_CRIT;
const char* s3fs_log_nest[S3FS_LOG_NEST_MAX] = {"", " ", " ", " "};
@ -135,9 +136,10 @@ static int64_t singlepart_copy_limit = FIVE_GB;
static bool is_specified_endpoint = false;
static int s3fs_init_deferred_exit_status = 0;
static bool support_compat_dir = true;// default supports compatibility directory type
static int max_keys_list_object = 1000;// default is 1000
static const std::string allbucket_fields_type = ""; // special key for mapping(This name is absolutely not used as a bucket name)
static const std::string keyval_fields_type = "\t"; // special key for mapping(This name is absolutely not used as a bucket name)
static const std::string allbucket_fields_type = ""; // special key for mapping(This name is absolutely not used as a bucket name)
static const std::string keyval_fields_type = "\t"; // special key for mapping(This name is absolutely not used as a bucket name)
static const std::string aws_accesskeyid = "AWSAccessKeyId";
static const std::string aws_secretkey = "AWSSecretKey";
@ -892,7 +894,7 @@ static int s3fs_readlink(const char* path, char* buf, size_t size)
// Read
ssize_t ressize;
if(0 > (ressize = ent->Read(buf, 0, readsize))){
S3FS_PRN_ERR("could not read file(file=%s, errno=%zd)", path, ressize);
S3FS_PRN_ERR("could not read file(file=%s, ressize=%jd)", path, (intmax_t)ressize);
FdManager::get()->Close(ent);
return static_cast<int>(ressize);
}
@ -2138,7 +2140,7 @@ static int s3fs_read(const char* path, char* buf, size_t size, off_t offset, str
}
if(0 > (res = ent->Read(buf, offset, size, false))){
S3FS_PRN_WARN("failed to read file(%s). result=%zd", path, res);
S3FS_PRN_WARN("failed to read file(%s). result=%jd", path, (intmax_t)res);
}
FdManager::get()->Close(ent);
@ -2160,7 +2162,7 @@ static int s3fs_write(const char* path, const char* buf, size_t size, off_t offs
S3FS_PRN_WARN("different fd(%d - %llu)", ent->GetFd(), (unsigned long long)(fi->fh));
}
if(0 > (res = ent->Write(buf, offset, size))){
S3FS_PRN_WARN("failed to write file(%s). result=%zd", path, res);
S3FS_PRN_WARN("failed to write file(%s). result=%jd", path, (intmax_t)res);
}
FdManager::get()->Close(ent);
@ -2463,7 +2465,6 @@ static int s3fs_readdir(const char* path, void* buf, fuse_fill_dir_t filler, off
static int list_bucket(const char* path, S3ObjList& head, const char* delimiter, bool check_content_only)
{
int result;
string s3_realpath;
string query_delimiter;;
string query_prefix;;
@ -2472,7 +2473,6 @@ static int list_bucket(const char* path, S3ObjList& head, const char* delimiter,
bool truncated = true;
S3fsCurl s3fscurl;
xmlDocPtr doc;
BodyData* body;
S3FS_PRN_INFO1("[path=%s]", path);
@ -2495,7 +2495,7 @@ static int list_bucket(const char* path, S3ObjList& head, const char* delimiter,
// For dir with children, expect "dir/" and "dir/child"
query_maxkey += "max-keys=2";
}else{
query_maxkey += "max-keys=1000";
query_maxkey += "max-keys=" + str(max_keys_list_object);
}
while(truncated){
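For context, a compact sketch of how the new list_object_max_keys option drives the listing loop (simplified from list_bucket(); marker handling and XML parsing are elided, and str() — s3fs's number-to-string helper — is reproduced here with std::ostringstream):

```cpp
#include <sstream>
#include <string>

static int max_keys_list_object = 1000;  // set via -o list_object_max_keys

static std::string str(int value)
{
    std::ostringstream ss;
    ss << value;
    return ss.str();
}

std::string build_query(const std::string& next_marker)
{
    // One ListObjects request per page: S3 returns at most max-keys
    // entries and sets <IsTruncated> when more remain, so the caller
    // loops, passing the last key back as the next marker.
    std::string each_query;
    if(!next_marker.empty()){
        each_query += "marker=" + next_marker + "&";
    }
    each_query += "max-keys=" + str(max_keys_list_object);
    return each_query;
}
```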
@ -2508,11 +2508,12 @@ static int list_bucket(const char* path, S3ObjList& head, const char* delimiter,
each_query += query_prefix;
// request
int result;
if(0 != (result = s3fscurl.ListBucketRequest(path, each_query.c_str()))){
S3FS_PRN_ERR("ListBucketRequest returns with error.");
return result;
}
body = s3fscurl.GetBodyData();
BodyData* body = s3fscurl.GetBodyData();
// xmlDocPtr
if(NULL == (doc = xmlReadMemory(body->str(), static_cast<int>(body->size()), "", NULL, 0))){
@ -2572,7 +2573,7 @@ static int append_objects_from_xml_ex(const char* path, xmlDocPtr doc, xmlXPathC
return -1;
}
if(xmlXPathNodeSetIsEmpty(contents_xp->nodesetval)){
S3FS_PRN_WARN("contents_xp->nodesetval is empty.");
S3FS_PRN_DBG("contents_xp->nodesetval is empty.");
S3FS_XMLXPATHFREEOBJECT(contents_xp);
return 0;
}
@ -2724,7 +2725,7 @@ static xmlChar* get_base_exp(xmlDocPtr doc, const char* exp)
{
xmlXPathObjectPtr marker_xp;
string xmlnsurl;
string exp_string = "//";
string exp_string;
if(!doc){
return NULL;
@ -2733,8 +2734,11 @@ static xmlChar* get_base_exp(xmlDocPtr doc, const char* exp)
if(!noxmlns && GetXmlNsUrl(doc, xmlnsurl)){
xmlXPathRegisterNs(ctx, (xmlChar*)"s3", (xmlChar*)xmlnsurl.c_str());
exp_string += "s3:";
exp_string = "/s3:ListBucketResult/s3:";
} else {
exp_string = "/ListBucketResult/";
}
exp_string += exp;
if(NULL == (marker_xp = xmlXPathEvalExpression((xmlChar *)exp_string.c_str(), ctx))){
@ -2973,15 +2977,19 @@ static int set_xattrs_to_header(headers_t& meta, const char* name, const char* v
headers_t::iterator iter;
if(meta.end() == (iter = meta.find("x-amz-meta-xattr"))){
#if defined(XATTR_REPLACE)
if(XATTR_REPLACE == (flags & XATTR_REPLACE)){
// there is no xattr header but flags is replace, so failure.
return -ENOATTR;
}
#endif
}else{
#if defined(XATTR_CREATE)
if(XATTR_CREATE == (flags & XATTR_CREATE)){
// found xattr header but flags is only creating, so failure.
return -EEXIST;
}
#endif
strxattrs = iter->second;
}
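
The new #if defined(...) guards keep this code compilable on platforms that do not define the xattr flag macros, while preserving the setxattr(2) contract. A hypothetical reduction of the guarded logic (the ENOATTR fallback mirrors the usual Linux mapping, since Linux has no ENOATTR):

#include <cerrno>
#include <sys/xattr.h>    // XATTR_CREATE / XATTR_REPLACE where available
#ifndef ENOATTR
#define ENOATTR ENODATA   // assumed Linux-style fallback
#endif

static int check_xattr_flags(bool attr_exists, int flags)
{
#if defined(XATTR_REPLACE)
    if(!attr_exists && XATTR_REPLACE == (flags & XATTR_REPLACE)){
        return -ENOATTR;  // replace requested, but nothing to replace
    }
#endif
#if defined(XATTR_CREATE)
    if(attr_exists && XATTR_CREATE == (flags & XATTR_CREATE)){
        return -EEXIST;   // create requested, but the attribute already exists
    }
#endif
    return 0;             // flags permit the operation
}
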
@@ -3375,20 +3383,6 @@ static void* s3fs_init(struct fuse_conn_info* conn)
S3FS_PRN_DBG("Could not initialize cache directory.");
}
// ssl init
if(!s3fs_init_global_ssl()){
S3FS_PRN_CRIT("could not initialize for ssl libraries.");
s3fs_exit_fuseloop(EXIT_FAILURE);
return NULL;
}
// init curl
if(!S3fsCurl::InitS3fsCurl("/etc/mime.types")){
S3FS_PRN_CRIT("Could not initiate curl library.");
s3fs_exit_fuseloop(EXIT_FAILURE);
return NULL;
}
// check loading IAM role name
if(load_iamrole){
// load IAM role name from http://169.254.169.254/latest/meta-data/iam/security-credentials
@@ -3433,16 +3427,10 @@ static void s3fs_destroy(void*)
{
S3FS_PRN_INFO("destroy");
// Destroy curl
if(!S3fsCurl::DestroyS3fsCurl()){
S3FS_PRN_WARN("Could not release curl library.");
}
// cache(remove at last)
if(is_remove_cache && (!CacheFileStat::DeleteCacheFileStatDirectory() || !FdManager::DeleteCacheDirectory())){
S3FS_PRN_WARN("Could not remove cache directory.");
}
// ssl
s3fs_destroy_global_ssl();
}
static int s3fs_access(const char* path, int mask)
@@ -3645,20 +3633,6 @@ static int s3fs_utility_mode(void)
if(!utility_mode){
return EXIT_FAILURE;
}
// ssl init
if(!s3fs_init_global_ssl()){
S3FS_PRN_EXIT("could not initialize for ssl libraries.");
return EXIT_FAILURE;
}
// init curl
if(!S3fsCurl::InitS3fsCurl("/etc/mime.types")){
S3FS_PRN_EXIT("Could not initiate curl library.");
s3fs_destroy_global_ssl();
return EXIT_FAILURE;
}
printf("Utility Mode\n");
S3fsCurl s3fscurl;
@@ -3815,6 +3789,7 @@ static int s3fs_check_service(void)
return EXIT_FAILURE;
}
}
s3fscurl.DestroyCurlHandle();
// make sure remote mountpath exists and is a directory
if(mount_prefix.size() > 0){
@@ -3828,46 +3803,45 @@ static int s3fs_check_service(void)
return EXIT_SUCCESS;
}
//
// Read and Parse passwd file
//
// Each line of the password file must be in one of the following formats:
// (1) "accesskey:secretkey" : AWS format for default(all) access key/secret key
// (2) "bucket:accesskey:secretkey" : AWS format for bucket's access key/secret key
// (3) "key=value" : Content-dependent KeyValue contents
//
// This function stores the result in bucketkvmap_t, mapping each bucket name
// to its key/value pairs. If the bucket name is empty (format 1 or 3), the
// mapping key is set to "\t" or "", respectively.
//
//
// Read and Parse passwd file
//
// Each line of the password file must be in one of the following formats:
// (1) "accesskey:secretkey" : AWS format for default(all) access key/secret key
// (2) "bucket:accesskey:secretkey" : AWS format for bucket's access key/secret key
// (3) "key=value" : Content-dependent KeyValue contents
//
// This function stores the result in bucketkvmap_t, mapping each bucket name
// to its key/value pairs. If the bucket name is empty (format 1 or 3), the
// mapping key is set to "\t" or "", respectively.
//
// Return: 1 - OK(could parse and set mapping etc.)
// 0 - NG(could not read any value)
// -1 - Should shutdown immediately
//
//
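
A hypothetical ~/.passwd-s3fs illustrating the three accepted formats (the access key and secret key below are the well-known AWS documentation placeholders, and the key names in format (3) are those referred to by the aws_accesskeyid/aws_secretkey constants). Note that the parser below skips blank lines and lines starting with '#', but rejects any line containing whitespace or '[':

# format (1): default keys for all buckets
AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# format (2): keys for one bucket
mybucket:AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# format (3): key=value form
AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE
AWSSecretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
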
static int parse_passwd_file(bucketkvmap_t& resmap)
{
string line;
size_t first_pos;
size_t last_pos;
size_t first_pos;
readline_t linelist;
readline_t::iterator iter;
// open passwd file
// open passwd file
ifstream PF(passwd_file.c_str());
if(!PF.good()){
if(!PF.good()){
S3FS_PRN_EXIT("could not open passwd file : %s", passwd_file.c_str());
return -1;
return -1;
}
// read each line
// read each line
while(getline(PF, line)){
line = trim(line);
line = trim(line);
if(0 == line.size()){
continue;
}
if('#' == line[0]){
continue;
}
if(string::npos != line.find_first_of(" \t")){
if(string::npos != line.find_first_of(" \t")){
S3FS_PRN_EXIT("invalid line in passwd file, found whitespace character.");
return -1;
}
@@ -3875,89 +3849,89 @@ static int parse_passwd_file(bucketkvmap_t& resmap)
S3FS_PRN_EXIT("invalid line in passwd file, found a bracket \"[\" character.");
return -1;
}
linelist.push_back(line);
}
// read '=' type
linelist.push_back(line);
}
// read '=' type
kvmap_t kv;
for(iter = linelist.begin(); iter != linelist.end(); ++iter){
first_pos = iter->find_first_of("=");
if(first_pos == string::npos){
continue;
}
// formatted by "key=val"
if(first_pos == string::npos){
continue;
}
// formatted by "key=val"
string key = trim(iter->substr(0, first_pos));
string val = trim(iter->substr(first_pos + 1, string::npos));
if(key.empty()){
continue;
}
if(kv.end() != kv.find(key)){
if(key.empty()){
continue;
}
if(kv.end() != kv.find(key)){
S3FS_PRN_WARN("same key name(%s) found in passwd file, skip this.", key.c_str());
continue;
}
kv[key] = val;
}
// set special key name
resmap[string(keyval_fields_type)] = kv;
// read ':' type
continue;
}
kv[key] = val;
}
// set special key name
resmap[string(keyval_fields_type)] = kv;
// read ':' type
for(iter = linelist.begin(); iter != linelist.end(); ++iter){
first_pos = iter->find_first_of(":");
last_pos = iter->find_last_of(":");
if(first_pos == string::npos){
continue;
}
string bucket;
string accesskey;
string secret;
if(first_pos != last_pos){
// formatted by "bucket:accesskey:secretkey"
bucket = trim(iter->substr(0, first_pos));
accesskey = trim(iter->substr(first_pos + 1, last_pos - first_pos - 1));
secret = trim(iter->substr(last_pos + 1, string::npos));
}else{
// formatted by "accesskey:secretkey"
bucket = allbucket_fields_type;
accesskey = trim(iter->substr(0, first_pos));
secret = trim(iter->substr(first_pos + 1, string::npos));
}
if(resmap.end() != resmap.find(bucket)){
S3FS_PRN_EXIT("same bucket(%s) passwd setting found in passwd file.", ("" == bucket ? "default" : bucket.c_str()));
return -1;
}
kv.clear();
kv[string(aws_accesskeyid)] = accesskey;
kv[string(aws_secretkey)] = secret;
resmap[bucket] = kv;
}
return (resmap.empty() ? 0 : 1);
}
//
first_pos = iter->find_first_of(":");
size_t last_pos = iter->find_last_of(":");
if(first_pos == string::npos){
continue;
}
string bucket;
string accesskey;
string secret;
if(first_pos != last_pos){
// formatted by "bucket:accesskey:secretkey"
bucket = trim(iter->substr(0, first_pos));
accesskey = trim(iter->substr(first_pos + 1, last_pos - first_pos - 1));
secret = trim(iter->substr(last_pos + 1, string::npos));
}else{
// formatted by "accesskey:secretkey"
bucket = allbucket_fields_type;
accesskey = trim(iter->substr(0, first_pos));
secret = trim(iter->substr(first_pos + 1, string::npos));
}
if(resmap.end() != resmap.find(bucket)){
S3FS_PRN_EXIT("there are mutliple entries for the same bucket(%s) in the passwd file.", ("" == bucket ? "default" : bucket.c_str()));
return -1;
}
kv.clear();
kv[string(aws_accesskeyid)] = accesskey;
kv[string(aws_secretkey)] = secret;
resmap[bucket] = kv;
}
return (resmap.empty() ? 0 : 1);
}
//
// Return: 1 - OK(could read and set accesskey etc.)
// 0 - NG(could not read)
// -1 - Should shutdown immediately
//
//
static int check_for_aws_format(const kvmap_t& kvmap)
{
string str1(aws_accesskeyid);
string str2(aws_secretkey);
if(kvmap.empty()){
return 0;
}
if(kvmap.end() == kvmap.find(str1) && kvmap.end() == kvmap.find(str2)){
return 0;
}
if(kvmap.end() == kvmap.find(str1) || kvmap.end() == kvmap.find(str2)){
if(kvmap.empty()){
return 0;
}
if(kvmap.end() == kvmap.find(str1) && kvmap.end() == kvmap.find(str2)){
return 0;
}
if(kvmap.end() == kvmap.find(str1) || kvmap.end() == kvmap.find(str2)){
S3FS_PRN_EXIT("AWSAccesskey or AWSSecretkey is not specified.");
return -1;
}
return -1;
}
if(!S3fsCurl::SetAccessKey(kvmap.at(str1).c_str(), kvmap.at(str2).c_str())){
S3FS_PRN_EXIT("failed to set access key/secret key.");
return -1;
return -1;
}
return 1;
return 1;
}
//
@@ -4030,10 +4004,10 @@ static int check_passwd_file_perms(void)
//
static int read_passwd_file(void)
{
bucketkvmap_t bucketmap;
kvmap_t keyval;
bucketkvmap_t bucketmap;
kvmap_t keyval;
int result;
// if you got here, the password file
// exists and is readable by the
// current user, check for permissions
@@ -4041,41 +4015,41 @@ static int read_passwd_file(void)
return EXIT_FAILURE;
}
//
// parse passwd file
//
//
// parse passwd file
//
result = parse_passwd_file(bucketmap);
if(-1 == result){
return EXIT_FAILURE;
}
//
// check key=value type format.
//
//
// check key=value type format.
//
if(bucketmap.end() != bucketmap.find(keyval_fields_type)){
// aws format
// aws format
result = check_for_aws_format(bucketmap[keyval_fields_type]);
if(-1 == result){
return EXIT_FAILURE;
}else if(1 == result){
// success to set
}else if(1 == result){
// success to set
return EXIT_SUCCESS;
}
}
}
string bucket_key = allbucket_fields_type;
if(0 < bucket.size() && bucketmap.end() != bucketmap.find(bucket)){
bucket_key = bucket;
}
if(bucketmap.end() == bucketmap.find(bucket_key)){
string bucket_key = allbucket_fields_type;
if(0 < bucket.size() && bucketmap.end() != bucketmap.find(bucket)){
bucket_key = bucket;
}
if(bucketmap.end() == bucketmap.find(bucket_key)){
S3FS_PRN_EXIT("Not found access key/secret key in passwd file.");
return EXIT_FAILURE;
}
keyval = bucketmap[bucket_key];
if(keyval.end() == keyval.find(string(aws_accesskeyid)) || keyval.end() == keyval.find(string(aws_secretkey))){
}
keyval = bucketmap[bucket_key];
if(keyval.end() == keyval.find(string(aws_accesskeyid)) || keyval.end() == keyval.find(string(aws_secretkey))){
S3FS_PRN_EXIT("Not found access key/secret key in passwd file.");
return EXIT_FAILURE;
}
}
if(!S3fsCurl::SetAccessKey(keyval.at(string(aws_accesskeyid)).c_str(), keyval.at(string(aws_secretkey)).c_str())){
S3FS_PRN_EXIT("failed to set internal data for access key/secret key from passwd file.");
return EXIT_FAILURE;
@@ -4550,6 +4524,18 @@ static int my_fuse_opt_proc(void* data, const char* arg, int key, struct fuse_ar
is_ibm_iam_auth = true;
return 0;
}
if(0 == STR2NCMP(arg, "ibm_iam_endpoint=")){
std::string endpoint_url = "";
std::string iam_endpoint = strchr(arg, '=') + sizeof(char);
// Check url for http / https protocol string
if((iam_endpoint.compare(0, 8, "https://") != 0) && (iam_endpoint.compare(0, 7, "http://") != 0)) {
S3FS_PRN_EXIT("option ibm_iam_endpoint has invalid format, missing http / https protocol");
return -1;
}
endpoint_url = iam_endpoint + "/oidc/token";
S3fsCurl::SetIAMCredentialsURL(endpoint_url.c_str());
return 0;
}
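
For example, mounting with a hypothetical bucket and mountpoint as

    s3fs mybucket /mnt/s3 -o ibm_iam_auth -o ibm_iam_endpoint=https://iam.bluemix.net

passes the protocol check above and makes s3fs request tokens from https://iam.bluemix.net/oidc/token, since the handler appends "/oidc/token" to the validated endpoint before storing it.
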
if(0 == strcmp(arg, "ecs")){
if (is_ibm_iam_auth) {
S3FS_PRN_EXIT("option ecs cannot be used in conjunction with ibm");
@@ -4625,6 +4611,15 @@ static int my_fuse_opt_proc(void* data, const char* arg, int key, struct fuse_ar
S3fsCurl::SetReadwriteTimeout(rwtimeout);
return 0;
}
if(0 == strcmp(arg, "list_object_max_keys")){
int max_keys = static_cast<int>(s3fs_strtoofft(strchr(arg, '=') + sizeof(char)));
if(max_keys < 1000){
S3FS_PRN_EXIT("argument should be over 1000: list_object_max_keys");
return -1;
}
max_keys_list_object = max_keys;
return 0;
}
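
With hypothetical values: -o list_object_max_keys=5000 lets each ListBucket round trip return up to 5000 keys, reducing the number of requests needed to enumerate large directories, while -o list_object_max_keys=500 is rejected at startup by the check above, since values below the S3 default of 1000 would only add round trips.
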
if(0 == STR2NCMP(arg, "max_stat_cache_size=")){
unsigned long cache_size = static_cast<unsigned long>(s3fs_strtoofft(strchr(arg, '=') + sizeof(char)));
StatCache::getStatCacheData()->SetCacheSize(cache_size);
@@ -4673,8 +4668,6 @@ static int my_fuse_opt_proc(void* data, const char* arg, int key, struct fuse_ar
S3FS_PRN_EXIT("multipart_size option must be at least 5 MB.");
return -1;
}
// update ensure free disk space if it is not set.
FdManager::InitEnsureFreeDiskSpace();
return 0;
}
if(0 == STR2NCMP(arg, "ensure_diskfree=")){
@@ -4735,6 +4728,11 @@ static int my_fuse_opt_proc(void* data, const char* arg, int key, struct fuse_ar
found = host.find_last_of('/');
length = host.length();
}
// Check url for http / https protocol string
if((host.compare(0, 8, "https://") != 0) && (host.compare(0, 7, "http://") != 0)) {
S3FS_PRN_EXIT("option url has invalid format, missing http / https protocol");
return -1;
}
return 0;
}
if(0 == strcmp(arg, "sigv2")){
@@ -4777,6 +4775,11 @@ static int my_fuse_opt_proc(void* data, const char* arg, int key, struct fuse_ar
cipher_suites = strchr(arg, '=') + sizeof(char);
return 0;
}
if(0 == STR2NCMP(arg, "instance_name=")){
instance_name = strchr(arg, '=') + sizeof(char);
instance_name = "[" + instance_name + "]";
return 0;
}
//
// debug option for s3fs
//
@@ -4860,9 +4863,8 @@ int main(int argc, char* argv[])
LIBXML_TEST_VERSION
// get program name - emulate basename
size_t found = string::npos;
program_name.assign(argv[0]);
found = program_name.find_last_of("/");
size_t found = program_name.find_last_of("/");
if(found != string::npos){
program_name.replace(0, found+1, "");
}
@@ -4901,6 +4903,19 @@ int main(int argc, char* argv[])
exit(EXIT_FAILURE);
}
// ssl init
if(!s3fs_init_global_ssl()){
S3FS_PRN_EXIT("could not initialize for ssl libraries.");
exit(EXIT_FAILURE);
}
// init curl
if(!S3fsCurl::InitS3fsCurl("/etc/mime.types")){
S3FS_PRN_EXIT("Could not initiate curl library.");
s3fs_destroy_global_ssl();
exit(EXIT_FAILURE);
}
// clear this structure
memset(&s3fs_oper, 0, sizeof(s3fs_oper));
@@ -4909,6 +4924,8 @@ int main(int argc, char* argv[])
// should have been set
struct fuse_args custom_args = FUSE_ARGS_INIT(argc, argv);
if(0 != fuse_opt_parse(&custom_args, NULL, NULL, my_fuse_opt_proc)){
S3fsCurl::DestroyS3fsCurl();
s3fs_destroy_global_ssl();
exit(EXIT_FAILURE);
}
@@ -4917,10 +4934,14 @@ int main(int argc, char* argv[])
//
if(REDUCED_REDUNDANCY == S3fsCurl::GetStorageClass() && !S3fsCurl::IsSseDisable()){
S3FS_PRN_EXIT("use_sse option could not be specified with storage class reduced_redundancy.");
S3fsCurl::DestroyS3fsCurl();
s3fs_destroy_global_ssl();
exit(EXIT_FAILURE);
}
if(!S3fsCurl::FinalCheckSse()){
S3FS_PRN_EXIT("something wrong about SSE options.");
S3fsCurl::DestroyS3fsCurl();
s3fs_destroy_global_ssl();
exit(EXIT_FAILURE);
}
@@ -4928,12 +4949,16 @@ int main(int argc, char* argv[])
if(bucket.size() == 0){
S3FS_PRN_EXIT("missing BUCKET argument.");
show_usage();
S3fsCurl::DestroyS3fsCurl();
s3fs_destroy_global_ssl();
exit(EXIT_FAILURE);
}
// bucket names cannot contain upper case characters in virtual-hosted style
if((!pathrequeststyle) && (lower(bucket) != bucket)){
S3FS_PRN_EXIT("BUCKET %s, name not compatible with virtual-hosted style.", bucket.c_str());
S3fsCurl::DestroyS3fsCurl();
s3fs_destroy_global_ssl();
exit(EXIT_FAILURE);
}
@@ -4941,6 +4966,8 @@ int main(int argc, char* argv[])
found = bucket.find_first_of("/:\\;!@#$%^&*?|+=");
if(found != string::npos){
S3FS_PRN_EXIT("BUCKET %s -- bucket name contains an illegal character.", bucket.c_str());
S3fsCurl::DestroyS3fsCurl();
s3fs_destroy_global_ssl();
exit(EXIT_FAILURE);
}
@@ -4952,6 +4979,8 @@ int main(int argc, char* argv[])
if(mountpoint.size() == 0){
S3FS_PRN_EXIT("missing MOUNTPOINT argument.");
show_usage();
S3fsCurl::DestroyS3fsCurl();
s3fs_destroy_global_ssl();
exit(EXIT_FAILURE);
}
}
@@ -4959,18 +4988,26 @@ int main(int argc, char* argv[])
// error checking of command line arguments for compatibility
if(S3fsCurl::IsPublicBucket() && S3fsCurl::IsSetAccessKeys()){
S3FS_PRN_EXIT("specifying both public_bucket and the access keys options is invalid.");
S3fsCurl::DestroyS3fsCurl();
s3fs_destroy_global_ssl();
exit(EXIT_FAILURE);
}
if(passwd_file.size() > 0 && S3fsCurl::IsSetAccessKeys()){
S3FS_PRN_EXIT("specifying both passwd_file and the access keys options is invalid.");
S3fsCurl::DestroyS3fsCurl();
s3fs_destroy_global_ssl();
exit(EXIT_FAILURE);
}
if(!S3fsCurl::IsPublicBucket() && !load_iamrole && !is_ecs){
if(EXIT_SUCCESS != get_access_keys()){
S3fsCurl::DestroyS3fsCurl();
s3fs_destroy_global_ssl();
exit(EXIT_FAILURE);
}
if(!S3fsCurl::IsSetAccessKeys()){
S3FS_PRN_EXIT("could not establish security credentials, check documentation.");
S3fsCurl::DestroyS3fsCurl();
s3fs_destroy_global_ssl();
exit(EXIT_FAILURE);
}
// More error checking on the access key pair can be done
@@ -4980,6 +5017,8 @@ int main(int argc, char* argv[])
// check cache dir permission
if(!FdManager::CheckCacheDirExist() || !FdManager::CheckCacheTopDir() || !CacheFileStat::CheckCacheFileStatTopDir()){
S3FS_PRN_EXIT("could not allow cache directory permission, check permission of cache directories.");
S3fsCurl::DestroyS3fsCurl();
s3fs_destroy_global_ssl();
exit(EXIT_FAILURE);
}
@@ -4994,12 +5033,16 @@ int main(int argc, char* argv[])
S3fsCurl::SetDefaultAcl("");
}else if(defaultACL != "public-read"){
S3FS_PRN_EXIT("can only use 'public-read' or 'private' ACL while using ibm_iam_auth");
return -1;
S3fsCurl::DestroyS3fsCurl();
s3fs_destroy_global_ssl();
exit(EXIT_FAILURE);
}
if(create_bucket && !S3fsCurl::IsSetAccessKeyID()){
S3FS_PRN_EXIT("missing service instance ID for bucket creation");
return -1;
S3fsCurl::DestroyS3fsCurl();
s3fs_destroy_global_ssl();
exit(EXIT_FAILURE);
}
}
@@ -5033,13 +5076,18 @@ int main(int argc, char* argv[])
*/
if(utility_mode){
exit(s3fs_utility_mode());
int exitcode = s3fs_utility_mode();
S3fsCurl::DestroyS3fsCurl();
s3fs_destroy_global_ssl();
exit(exitcode);
}
// check free disk space
FdManager::InitEnsureFreeDiskSpace();
if(!FdManager::IsSafeDiskSpace(NULL, S3fsCurl::GetMultipartSize())){
if(!FdManager::IsSafeDiskSpace(NULL, S3fsCurl::GetMultipartSize() * S3fsCurl::GetMaxParallelCount())){
S3FS_PRN_EXIT("There is no enough disk space for used as cache(or temporary) directory by s3fs.");
S3fsCurl::DestroyS3fsCurl();
s3fs_destroy_global_ssl();
exit(EXIT_FAILURE);
}
@@ -5086,6 +5134,8 @@ int main(int argc, char* argv[])
// set signal handler for debugging
if(!set_s3fs_usr2_handler()){
S3FS_PRN_EXIT("could not set signal handler for SIGUSR2.");
S3fsCurl::DestroyS3fsCurl();
s3fs_destroy_global_ssl();
exit(EXIT_FAILURE);
}
@@ -5093,6 +5143,10 @@ int main(int argc, char* argv[])
fuse_res = fuse_main(custom_args.argc, custom_args.argv, &s3fs_oper, NULL);
fuse_opt_free_args(&custom_args);
// Destroy curl
if(!S3fsCurl::DestroyS3fsCurl()){
S3FS_PRN_WARN("Could not release curl library.");
}
s3fs_destroy_global_ssl();
// cleanup xml2


@@ -453,6 +453,7 @@ AutoLock::~AutoLock()
string get_username(uid_t uid)
{
static size_t maxlen = 0; // set once
int result;
char* pbuf;
struct passwd pwinfo;
struct passwd* ppwinfo = NULL;
@@ -461,9 +462,17 @@ string get_username(uid_t uid)
if(0 == maxlen){
long res = sysconf(_SC_GETPW_R_SIZE_MAX);
if(0 > res){
S3FS_PRN_WARN("could not get max pw length.");
maxlen = 0;
return string("");
// SUSv4tc1 says the following about _SC_GETGR_R_SIZE_MAX and
// _SC_GETPW_R_SIZE_MAX:
// Note that sysconf(_SC_GETGR_R_SIZE_MAX) may return -1 if
// there is no hard limit on the size of the buffer needed to
// store all the groups returned.
if (errno != 0){
S3FS_PRN_WARN("could not get max pw length.");
maxlen = 0;
return string("");
}
res = 1024; // default initial length
}
maxlen = res;
}
@@ -471,12 +480,22 @@ string get_username(uid_t uid)
S3FS_PRN_CRIT("failed to allocate memory.");
return string("");
}
// get group information
if(0 != getpwuid_r(uid, &pwinfo, pbuf, maxlen, &ppwinfo)){
S3FS_PRN_WARN("could not get pw information.");
// get pw information
while(ERANGE == (result = getpwuid_r(uid, &pwinfo, pbuf, maxlen, &ppwinfo))){
free(pbuf);
maxlen *= 2;
if(NULL == (pbuf = (char*)malloc(sizeof(char) * maxlen))){
S3FS_PRN_CRIT("failed to allocate memory.");
return string("");
}
}
if(0 != result){
S3FS_PRN_ERR("could not get pw information(%d).", result);
free(pbuf);
return string("");
}
// check pw
if(NULL == ppwinfo){
free(pbuf);
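
A standalone sketch of the grow-and-retry pattern this patch introduces (simplified from the hunk above; note that errno must be zeroed before sysconf() for the errno test to be meaningful):

#include <cerrno>
#include <cstdlib>
#include <pwd.h>
#include <string>
#include <unistd.h>

std::string username_of(uid_t uid)
{
    errno = 0;
    long res = sysconf(_SC_GETPW_R_SIZE_MAX);
    // -1 with errno still 0 means "no hard limit", not an error.
    size_t maxlen = (0 < res) ? static_cast<size_t>(res) : 1024;
    char* pbuf = static_cast<char*>(malloc(maxlen));
    if(!pbuf){
        return "";
    }
    struct passwd pwinfo;
    struct passwd* ppwinfo = NULL;
    int result;
    // ERANGE means the buffer was too small: double it and try again.
    while(ERANGE == (result = getpwuid_r(uid, &pwinfo, pbuf, maxlen, &ppwinfo))){
        free(pbuf);
        maxlen *= 2;
        if(NULL == (pbuf = static_cast<char*>(malloc(maxlen)))){
            return "";
        }
    }
    std::string name = (0 == result && ppwinfo) ? ppwinfo->pw_name : "";
    free(pbuf);
    return name;
}
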
@@ -498,10 +517,18 @@ int is_uid_include_group(uid_t uid, gid_t gid)
// make buffer
if(0 == maxlen){
long res = sysconf(_SC_GETGR_R_SIZE_MAX);
if(0 > res){
S3FS_PRN_ERR("could not get max name length.");
maxlen = 0;
return -ERANGE;
if(0 > res) {
// SUSv4tc1 says the following about _SC_GETGR_R_SIZE_MAX and
// _SC_GETPW_R_SIZE_MAX:
// Note that sysconf(_SC_GETGR_R_SIZE_MAX) may return -1 if
// there is no hard limit on the size of the buffer needed to
// store all the groups returned.
if (errno != 0) {
S3FS_PRN_ERR("could not get max name length.");
maxlen = 0;
return -ERANGE;
}
res = 1024; // default initial length
}
maxlen = res;
}
@@ -594,7 +621,7 @@ int mkdirp(const string& path, mode_t mode)
return EPERM;
}
}else{
if(0 != mkdir(base.c_str(), mode)){
if(0 != mkdir(base.c_str(), mode) && errno != EEXIST){
return errno;
}
}
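
The added errno check matters because mkdirp() can race with another thread or process creating the same directory: the losing mkdir() fails with EEXIST even though the directory now exists, which is success for mkdirp's purposes. A hypothetical reduction of the fixed branch:

#include <cerrno>
#include <string>
#include <sys/stat.h>
#include <sys/types.h>

static int make_one_component(const std::string& base, mode_t mode)
{
    if(0 != mkdir(base.c_str(), mode) && errno != EEXIST){
        return errno;   // a real failure such as EACCES or ENOENT
    }
    return 0;           // created it, or someone else already had
}
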
@@ -981,7 +1008,7 @@ void show_help (void)
" header. see http://aws.amazon.com/documentation/s3/ for the\n"
" full list of canned acls\n"
"\n"
" retries (default=\"2\")\n"
" retries (default=\"5\")\n"
" - number of times to retry a failed s3 transaction\n"
"\n"
" use_cache (default=\"\" which means disabled)\n"
@@ -1083,7 +1110,11 @@ void show_help (void)
" readwrite_timeout (default=\"60\" seconds)\n"
" - time to wait between read/write activity before giving up\n"
"\n"
" max_stat_cache_size (default=\"1000\" entries (about 4MB))\n"
" list_object_max_keys (default=\"1000\")\n"
" - specify the maximum number of keys returned by S3 list object\n"
" API. The default is 1000. you can set this value to 1000 or more.\n"
"\n"
" max_stat_cache_size (default=\"100,000\" entries (about 40MB))\n"
" - maximum number of entries in the stat cache\n"
"\n"
" stat_cache_expire (default is no expire)\n"
@@ -1125,7 +1156,7 @@ void show_help (void)
" multipart_size (default=\"10\")\n"
" - part size, in MB, for each multipart request.\n"
"\n"
" ensure_diskfree (default same multipart_size value)\n"
" ensure_diskfree (default 0)\n"
" - sets MB to ensure disk free space. s3fs makes file for\n"
" downloading, uploading and caching files. If the disk free\n"
" space is smaller than this value, s3fs do not use diskspace\n"
@@ -1182,6 +1213,9 @@ void show_help (void)
" In this mode, the AWSAccessKey and AWSSecretKey will be used as\n"
" IBM's Service-Instance-ID and APIKey, respectively.\n"
"\n"
" ibm_iam_endpoint (default is https://iam.bluemix.net)\n"
" - sets the url to use for IBM IAM authentication.\n"
"\n"
" use_xattr (default is not handling the extended attribute)\n"
" Enable to handle the extended attribute(xattrs).\n"
" If you set this option, you can use the extended attribute.\n"
@@ -1240,6 +1274,9 @@ void show_help (void)
" can be found on the CURL library documentation:\n"
" https://curl.haxx.se/docs/ssl-ciphers.html\n"
"\n"
" instance_name - The instance name of the current s3fs mountpoint.\n"
" This name will be added to logging messages and user agent headers sent by s3fs.\n"
"\n"
" complement_stat (complement lack of file/directory mode)\n"
" s3fs complements lack of information about file/directory mode\n"
" if a file or a directory object does not have x-amz-meta-mode\n"


@@ -303,7 +303,7 @@ char* s3fs_base64(const unsigned char* input, size_t length)
if(!input || 0 >= length){
return NULL;
}
if(NULL == (result = (char*)malloc((((length / 3) + 1) * 4 + 1) * sizeof(char)))){
if(NULL == (result = reinterpret_cast<char*>(malloc((((length / 3) + 1) * 4 + 1) * sizeof(char))))){
return NULL; // ENOMEM
}
@@ -353,7 +353,7 @@ unsigned char* s3fs_decode64(const char* input, size_t* plength)
if(!input || 0 == strlen(input) || !plength){
return NULL;
}
if(NULL == (result = (unsigned char*)malloc((strlen(input) + 1)))){
if(NULL == (result = reinterpret_cast<unsigned char*>(malloc((strlen(input) + 1))))){
return NULL; // ENOMEM
}


@@ -50,7 +50,7 @@ export S3_URL
export TEST_SCRIPT_DIR=`pwd`
export TEST_BUCKET_MOUNT_POINT_1=${TEST_BUCKET_1}
S3PROXY_VERSION="1.5.3"
S3PROXY_VERSION="1.6.0"
S3PROXY_BINARY=${S3PROXY_BINARY-"s3proxy-${S3PROXY_VERSION}"}
if [ ! -f "$S3FS_CREDENTIALS_FILE" ]