153 Commits
v1.77 ... v1.79

SHA1 Message Date
cbc057bca7 Merge pull request #211 from s3fs-fuse/release179
Updated ChangeLog and configure.ac for v1.79
2015-07-20 01:23:35 +09:00
6442642656 Updated ChangeLog and configure.ac for v1.79 2015-07-19 16:14:33 +00:00
07a5a36b6a Merge pull request #207 from jalessio/fix_a_few_spelling_issues
Fixed a few small spelling issues.
2015-07-12 01:02:53 +09:00
912bc58df0 Fixed a few small spelling issues. 2015-07-10 11:50:40 -07:00
13a91a52e8 Merge pull request #204 from andrewgaul/xattr-test
Add integration test for xattr
2015-06-29 00:51:24 +09:00
4190130194 Merge pull request #202 from flandr/osx-xattr
Specialize {set,get}xattr for OS X
2015-06-29 00:29:57 +09:00
d9b124f91e Add integration test for xattr 2015-06-28 04:16:35 -07:00
9b3c87ec97 Specialize {set,get}xattr for OS X
These system calls take an extra 'position' parameter on OS X. A
non-zero position value is only valid for resource forks (the Darwin
VFS layer will reject anything else with EINVAL); this patch simply
adds and ignores the parameter on Apple platforms.

Allows building against OSXFUSE.
2015-06-25 12:56:15 -07:00
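For illustration, a minimal sketch of the platform split described in this commit; the handler name s3fs_setxattr and its body are assumptions here, but the extra uint32_t position argument is what osxfuse passes on Darwin:

    #include <stddef.h>
    #include <stdint.h>

    #ifdef __APPLE__
    static int s3fs_setxattr(const char* path, const char* name, const char* value,
                             size_t size, int flags, uint32_t position)
    #else
    static int s3fs_setxattr(const char* path, const char* name, const char* value,
                             size_t size, int flags)
    #endif
    {
    #ifdef __APPLE__
        // A non-zero position is only meaningful for resource forks; the Darwin
        // VFS layer rejects anything else with EINVAL, so the value is ignored here.
        (void)position;
    #endif
        // ... common xattr handling (store/read the x-amz-meta-xattr header) ...
        (void)path; (void)name; (void)value; (void)size; (void)flags;
        return 0;
    }
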
8f85e5e543 Merge pull request #200 from s3fs-fuse/fixbug
fixed fallback to sigv2 for bucket create and GCS
2015-06-20 13:45:45 +09:00
966d229787 fixed fallback to sigv2 for bucket create and GCS 2015-06-20 04:34:32 +00:00
4d49ace06b Merge pull request #192 from andrewgaul/special-characters
Simplify URL encoding
2015-06-20 11:47:22 +09:00
ad8c64104e Merge pull request #199 from s3fs-fuse/xattr
Supported extended attributes(retry)
2015-06-20 11:46:47 +09:00
d59eff4288 Merge pull request #198 from andrewgaul/travis
Disable integration tests for Travis
2015-06-20 10:42:23 +09:00
219b155037 Disable integration tests for Travis
The previous KVM infrastructure supported this but their new VMware
infrastructure does not.
2015-06-18 11:28:10 -07:00
fe3abed9f0 Changed codes about iterator etc 2015-06-13 03:27:07 +00:00
0ecf4aa6b4 Changed codes about iterator 2015-06-13 03:08:56 +00:00
477573265a Merge pull request #190 from Rotwang/master
Add a no_check_certificate option.
2015-06-13 11:12:35 +09:00
4e03acf17a Simplify URL encoding
This also encodes asterisk and tilde correctly when listing a file
with a V4 auth endpoint.  Also add tests for special characters
although s3proxy does not yet support V4 auth.
Fixes #188.  Fixes #194.
2015-06-10 13:15:58 -07:00
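As a hedged sketch of the behaviour described above (url_encode_sketch is illustrative, not s3fs's actual helper): percent-encode everything except the RFC 3986 unreserved characters and '/', so '*' becomes %2A while '~' is left alone, which is what V4 signing expects for object keys.

    #include <cctype>
    #include <cstdio>
    #include <string>

    static std::string url_encode_sketch(const std::string& s)
    {
        std::string result;
        for(size_t i = 0; i < s.length(); ++i){
            unsigned char c = static_cast<unsigned char>(s[i]);
            if(std::isalnum(c) || c == '/' || c == '.' || c == '-' || c == '_' || c == '~'){
                result += static_cast<char>(c);   // unreserved (and '/'): keep as-is
            }else{
                char buf[4];
                std::snprintf(buf, sizeof(buf), "%%%02X", static_cast<unsigned int>(c));   // e.g. '*' -> "%2A"
                result += buf;
            }
        }
        return result;
    }
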
84fb3d83d8 Fixed xattr for binary value 2015-06-06 16:39:39 +00:00
3522e5eda3 Add no_check_certificate option which allows to ignore issues with self signed certs. 2015-05-20 17:32:36 +02:00
3056644969 Merge pull request #185 from andrewgaul/typos
Correct obvious typos in usage and README
2015-05-06 22:37:22 +09:00
91587ad2c8 Merge pull request #184 from andrewgaul/multipart-size
Add usage information for multipart_size
2015-05-06 22:36:37 +09:00
8a73d9fff0 Correct obvious typos in usage and README 2015-05-04 16:25:05 -07:00
28ee9f27b9 Add usage information for multipart_size
Also improve error message.
2015-05-04 16:21:58 -07:00
7ac58a1c69 Merge pull request #178 from andrewgaul/gitignore
Update .gitignore
2015-04-29 00:56:54 +09:00
3914281f1b Merge pull request #177 from andrewgaul/mailmap
Add .mailmap
2015-04-29 00:56:44 +09:00
3d734ad3e3 Merge pull request #176 from mooredan/master
configure.ac: detect target, if target is darwin (OSX), then
2015-04-29 00:55:35 +09:00
bb4075d7b9 Merge pull request #173 from andrewgaul/travis
Run integration tests via Travis
2015-04-29 00:54:14 +09:00
5b11ac0f4c Moved __APPLE__ #endif to correct position 2015-04-27 12:14:09 -07:00
7bc5f0ca13 Update .gitignore 2015-04-27 11:19:14 -07:00
14ce061215 Add .mailmap
This cleans up git shortlog output.
2015-04-27 11:17:39 -07:00
adb5a35097 configure.ac: detect target, if target is darwin (OSX), then
change the minimum version of fuse required.  Change the
checkers to use a variable for the minimum fuse version
instead of it being hardcoded in four different places.

src/s3fs.cpp: Use __APPLE__ define around fuse code that
is offensive to osxfuse. Not including the code doesn't
seem to matter.
2015-04-25 17:13:20 -07:00
b0a12bcac1 Disable rename_before_close
This test currently fails and interferes with the larger integration
test.  References #145.
2015-04-24 11:28:18 -07:00
39d4715b82 Run integration tests via Travis
Mail from the Travis team:

Thanks for the email. I have set up s3fs-fuse/s3fs-fuse with our alpha
testing stack which may allow you to use FUSE.

To use it, add the following to your .travis.yml:
dist: trusty

Please keep in mind that the service may become unavailable without
notice, and change details. We welcome your feedback as to what works
and what does not with this setup.
2015-04-23 21:25:24 -07:00
aac92bd6c0 Fixed wrong owner checking and return codes 2015-04-21 16:18:05 +00:00
f258a14070 Supported extended attributes, initial commit 2015-04-20 17:24:57 +00:00
3701f1c16b Merge pull request #171 from pabigot/mixedcase
Support buckets with mixed-case names
2015-04-21 02:06:07 +09:00
92fcee824b curl: use pathrequeststyle option when constructing Host endpoint
Buckets with mixed-case names can't be accessed with the virtual-hosted
style API due to DNS limitations.  S3FS has an option for
pathrequeststyle which is used for the URL, but it was not applied when
building the endpoint passed through the Host header.  Fix this, and
relax the validation on bucket names when using this style.

See: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2015-04-19 08:31:40 -05:00
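A minimal sketch of the distinction, with hypothetical names (build_target is not the real s3fs function): with path-style requests the bucket moves from the Host header into the request path, which sidesteps the DNS restrictions that break mixed-case bucket names.

    #include <string>

    static void build_target(const std::string& bucket, const std::string& path,
                             bool pathrequeststyle,
                             std::string& host, std::string& resource)
    {
        if(pathrequeststyle){
            host     = "s3.amazonaws.com";            // Host header carries no bucket
            resource = "/" + bucket + path;           // e.g. /MixedCaseBucket/dir/file
        }else{
            host     = bucket + ".s3.amazonaws.com";  // virtual-hosted style
            resource = path;                          // e.g. /dir/file
        }
    }
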
00f8e1d0ba Merge pull request #170 from s3fs-fuse/issue/#157
Reviewed and fixed response codes print in curl.cpp - #157
2015-04-18 23:37:28 +09:00
43191eea53 Added cache apt in travis.yml 2015-04-18 13:45:58 +00:00
490ed8f689 Reviewed and fixed response codes print in curl.cpp - #157 2015-04-18 13:32:04 +00:00
30152284cc Merge pull request #168 from kahing/fix-v4-host-endpoint
switch to use region specific endpoints to compute correct v4 signature
2015-04-18 18:02:45 +09:00
70097709b2 switch to use region specific endpoints to compute correct v4 signature
fix #133
2015-04-14 16:25:17 -07:00
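Roughly, and only as an assumption about the endpoint naming in use at the time (region_endpoint is illustrative): the host used for signing and sending must be the region's own endpoint, so that the V4 credential scope and the signature match.

    #include <string>

    static std::string region_endpoint(const std::string& region)
    {
        if(region == "us-east-1"){
            return "s3.amazonaws.com";                // classic global endpoint
        }
        return "s3-" + region + ".amazonaws.com";     // e.g. s3-eu-west-1.amazonaws.com
    }
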
07e007052a Merge pull request #167 from s3fs-fuse/timeoutbranch
Increased default connecting/reading/writing timeout value
2015-04-12 11:18:52 +09:00
bd27294ab0 Increased default connecting/reading/writing timeout value 2015-04-12 02:04:13 +00:00
5e5c20757b Merge pull request #165 from kahing/auth_v4_refactor
Auth v4 refactor
2015-04-12 08:13:25 +09:00
6231ae208a Merge pull request #164 from kahing/fix_v4_signing_host
send the correct Host header when using -o url
2015-04-12 08:12:51 +09:00
42a4f5fd95 Merge pull request #159 from andrewgaul/s3proxy-1.4.0
Upgrade to S3Proxy 1.4.0
2015-04-12 08:05:49 +09:00
6e0a302f7d refactor sigv4 to reduce code duplication 2015-04-09 15:11:59 -07:00
98af055d8b send the correct Host header when using -o url
fixes #161
2015-04-09 13:53:50 -07:00
fa5c7ff4df Upgrade to S3Proxy 1.4.0
Release notes:

https://github.com/andrewgaul/s3proxy/releases/tag/s3proxy-1.4.0
2015-03-29 23:59:39 -07:00
d7327df885 Merge pull request #156 from s3fs-fuse/issue/#126
Fixed a bug about ssl session sharing with libcurl older than 7.23.0 - issue#126
2015-03-21 16:19:58 +09:00
0f13c8fe97 Fixed a bug about ssl session sharing with libcurl older than 7.23.0 - issue/#126 2015-03-21 07:04:20 +00:00
44d740080b Merge pull request #155 from s3fs-fuse/bugfix
Fixed a bug: unable to mount bucket subdirectory
2015-03-21 13:39:19 +09:00
2fc3a4e91e Fixed a bug: unable to mount bucket subdirectory 2015-03-21 04:31:59 +00:00
66e0233410 Merge pull request #154 from s3fs-fuse/issue#149
Fixed url-encoding for ampersand etc on sigv4 - Improvement/#149
2015-03-21 11:32:08 +09:00
a04bec85b2 Fixed url-encoding for ampersand etc on sigv4 - Improvement/#149 2015-03-21 02:11:55 +00:00
f861b11a91 Merge pull request #147 from andrewgaul/s3proxy-snapshot
Use S3Proxy 1.4.0-SNAPSHOT
2015-03-11 01:41:58 +09:00
37f9bbd231 Merge pull request #146 from kahing/exit_handler_for_test
add exit handler to cleanup on failures
2015-03-11 01:41:42 +09:00
af004576f1 Merge pull request #150 from s3fs-fuse/fixbug
Fixed a bug not handling fsync - #145
2015-03-11 01:29:17 +09:00
26453c4874 Fixed a bug not handling fsync. 2015-03-10 16:18:03 +00:00
4e18bf0bc2 Use S3Proxy 1.4.0-SNAPSHOT 2015-03-09 18:05:14 -07:00
7c298e94f5 add exit handler to cleanup on failures
and other changes that make debugging easier
2015-03-09 15:56:38 -07:00
761d2399f2 Merge pull request #144 from andrewgaul/travis
Add Travis configuration
2015-03-10 01:37:50 +09:00
1210cf8c6c Add Travis configuration 2015-03-09 03:57:39 -07:00
524e005b5c Merge pull request #143 from s3fs-fuse/issue#141
Fixed a bug no use_cache case about fixed #138 - issue#141
2015-03-09 01:43:57 +09:00
d06b6d7d41 Fixed a bug no use_cache case about fixed #138 - issue#141 2015-03-08 16:41:14 +00:00
e66e5d1dfc Merge pull request #138 from s3fs-fuse/issue#97
Fixed bugs, not turn use_cache off and try to load to end - issue#97
2015-03-04 17:52:22 +09:00
114966e7c0 Fixed bugs, not turn use_cache off and try to load to end - issue#97 2015-03-04 08:48:37 +00:00
d2246297bd Merge pull request #137 from andrewgaul/integration-test-mpu
Add test for multi-part upload
2015-03-04 12:21:22 +09:00
8ec5decbce Merge pull request #136 from andrewgaul/integration-test-fixups
Small fixes to integration tests
2015-03-04 12:20:17 +09:00
0f7d77d599 Small fixes to integration tests
Use S3Proxy pid instead of self pid, ensure correct passwd
permissions, and use fusermount instead of umount so that non-root can
run tests.
2015-03-03 01:42:03 -08:00
699e3b3d79 Add test for multi-part upload 2015-03-02 17:17:30 -08:00
2f8ad7ace8 Merge pull request #135 from andrewgaul/mpu-v4
Correct V4 signature for initiate multipart upload
2015-03-01 22:57:10 +09:00
6b6567ec9b Merge pull request #134 from andrewgaul/mpu-v2
Include Content-Type in complete MPU V2 signature
2015-03-01 22:50:54 +09:00
c8c71650eb Merge pull request #131 from kahing/test-ls
Test ls
2015-03-01 22:47:55 +09:00
a07e804f57 Include Content-Type in complete MPU V2 signature
Previously this failed with SignatureDoesNotMatch since the headers
included it but the signature did not.  Fixes #125.
2015-02-28 18:03:21 -08:00
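For context, a sketch of the V2 string-to-sign for the completing POST; the date and Content-Type values below are placeholders, and any canonicalized x-amz-* headers would follow the date line. The point is that the third line must carry the same Content-Type header that is actually sent.

    #include <string>

    const std::string string_to_sign =
        std::string("POST\n")                 // HTTP verb
        + "\n"                                // Content-MD5 (empty)
        + "application/xml\n"                 // Content-Type: must match the header sent
        + "Tue, 03 Mar 2015 00:00:00 GMT\n"   // Date
        + "/bucket/object?uploadId=...";      // CanonicalizedResource
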
e9656810e3 Correct V4 signature for initiate multipart upload
Query parameters need a trailing = for V4 signatures.  Send correct
content-sha256 although Amazon does not seem to enforce this for
zero-length bodies.  Finally remove a stale comment.  Fixes #133.
2015-02-28 17:50:06 -08:00
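A small illustration of the query-string rule mentioned above: in a V4 canonical request, a parameter that carries no value still needs a trailing '='.

    #include <string>

    // Canonical query string when initiating a multipart upload:
    const std::string canonical_query_ok  = "uploads=";  // what V4 signing must hash
    const std::string canonical_query_bad = "uploads";   // signed previously -> SignatureDoesNotMatch
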
4ee32d7559 test ls after creating files and dirs 2015-02-27 10:55:25 -08:00
53083202ba Merge pull request #132 from andrewgaul/s3proxy-integration-test
Use S3Proxy to run integration tests
2015-02-27 00:17:46 +09:00
574a48f81f Merge pull request #130 from kahing/refactor-integration-test
refactor integration tests create/cleanup file
2015-02-27 00:06:23 +09:00
1b1cf2d4bd Merge pull request #124 from timuralp/bug/fix_fallback_v2
Fallback to v2 signatures correctly.
2015-02-27 00:02:12 +09:00
e811ae1104 Use s3proxy to run integration tests
References #129.
2015-02-24 12:08:22 -08:00
d65bf4128d refactor integration tests create/cleanup file 2015-02-23 12:08:14 -08:00
be5735edb8 Fallback to v2 signatures correctly.
Missing parameter to SetSignatureV4() call in the fallback code path
results in not actually falling back.
2015-02-16 17:35:09 -08:00
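The shape of the fix, as a hedged sketch with a stand-in class (the real setter lives in S3fsCurl and presumably toggles the is_sigv4 flag shown in the curl.h diff further down):

    struct SignerConfig {
        static bool is_sigv4;
        static void SetSignatureV4(bool isset = true) { is_sigv4 = isset; }
    };
    bool SignerConfig::is_sigv4 = true;

    static void fallback_to_v2()
    {
        // The bug: calling SetSignatureV4() with no argument relied on the
        // default (true), so the client kept signing with V4 and the fallback
        // never took effect; passing false explicitly is the fix.
        SignerConfig::SetSignatureV4(false);
    }
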
5bf2b46fa3 Merge pull request #119 from s3fs-fuse/issue#107
Added new mp_umask option about issue#107, pr#110
2015-02-08 02:19:54 +09:00
cf2b0cca22 Added new mp_umask option about issue#107, pr#110 2015-02-07 17:16:45 +00:00
4ae5043534 Merge pull request #116 from s3fs-fuse/dev_sv4
Supported signature version 4
2015-02-03 01:43:48 +09:00
1424f87754 Supported signature version 4 for GnuTLS/NSS and automatically set endpoint/sigv2 2015-02-02 16:36:08 +00:00
4f953f9bd7 Clean codes for signature v4 and added new sigv2 option 2015-01-28 17:13:11 +00:00
0d2f3e2dc4 Fixed bugs, segfault and signature error at listing. 2015-01-24 16:36:30 +00:00
bb1f1d3faa Merged manually from caxapniy/s3fs-fuse/tree/1.77v4merge for signature v4 - #102 2015-01-20 16:31:36 +00:00
98daf16681 Merge pull request #104 from kahing/rename_before_close
fix rename before close
2015-01-14 00:40:41 +09:00
939ba2b4b3 Merge pull request #101 from adobos/directory_empty_optimization
Optimized function "bool directory_empty()"
2015-01-14 00:21:47 +09:00
d0b82428d5 Merge pull request #100 from adobos/dns_ssl_switch_bugfix
CURL handles not properly initialized to use DNS or SSL session caching.
2015-01-14 00:11:46 +09:00
902911765e Merge pull request #93 from andrewgaul/unit-test
Add simple unit tests for trim functions
2015-01-14 00:07:01 +09:00
03d84a07d1 fix rename before close
nautilus does this when you drag and drop to overwrite a file:

1) create .goutputstream-XXXXXX to write to
2) fsync the fd for .goutputstream-XXXXXX
3) rename .goutputstream-XXXXXX to target file
4) close the fd for .goutputstream-XXXXXX

previously, doing this on s3fs would result in an empty target file
because after the rename, s3fs would not flush the content of
.goutputstream-XXXXXX to target file.

this change moves the FdEntity from the old path to the new path
whenever rename happens. On flush s3fs would now flush the correct
content to the rename target.
2015-01-12 15:05:54 -08:00
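A minimal, self-contained sketch of the idea rather than the actual FdEntity/FdManager code: the open-file state is re-keyed from the old path to the new one when a rename happens, so a later flush targets the rename destination.

    #include <map>
    #include <string>

    struct OpenFileState { int fd; /* cached pages, dirty flag, ... */ };
    static std::map<std::string, OpenFileState*> open_files;

    static void move_open_entry(const std::string& from, const std::string& to)
    {
        std::map<std::string, OpenFileState*>::iterator it = open_files.find(from);
        if(it != open_files.end()){
            OpenFileState* ent = it->second;
            open_files.erase(it);
            open_files[to] = ent;   // flush() now uploads to the rename target
        }
    }
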
1f686d93ff Merge pull request #103 from s3fs-fuse/issue#87
Remove prefix option in s3fs man page - issue#87
2015-01-06 23:49:11 +09:00
d95b9ef1ac Remove prefix option in s3fs man page - issue#87 2015-01-06 14:43:19 +00:00
045f1e7906 CURL handles were not properly initialized to use DNS caching, or SSL session caching. 2014-12-23 22:31:54 -08:00
69ef7fbefb Optimized function directory_empty: check for at most one entry when evaluating whether a directory is empty or not (as opposed to doing full directory listing) 2014-12-23 22:29:13 -08:00
a56b8db410 Add simple unit tests for trim functions
Subsequent commits will use this infrastructure.  Also reparent
prepare_url which relies on unrelated bucket, foreground2, and
pathrequeststyle symbols.
2014-12-06 18:07:14 -08:00
082eb24c12 Merge pull request #83 from tmwong2003/develop
Changed option processing to use strtol() to get a umask
2014-11-16 23:49:24 +09:00
f04b659f5e Changed option processing to use strtol() to get a umask
get_mode()/s3fs_strtoofft() does not handle octal umask values, which
results in unexpected behavior when trying to set a world-readable umask
value.
2014-11-12 23:29:41 +00:00
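A minimal sketch, assuming the option value arrives as a C string (parse_umask is illustrative); the point is parsing with base 8.

    #include <cstdlib>
    #include <sys/types.h>

    static mode_t parse_umask(const char* value)
    {
        // "0022" parsed base-8 is 18 decimal and masks the group/other write
        // bits; a decimal-style parse of the same string yields a different,
        // unintended mask, which is why world-readable settings misbehaved.
        return static_cast<mode_t>(strtol(value, NULL, 8));
    }
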
eedc621637 Merge pull request #79 from buptUnixGuys/master
Update curl.cpp
2014-11-09 00:05:04 +09:00
b31ec5c4af Update curl.cpp
A space after the colon causes a signature mismatch when using an "ahbe_conf" file to add additional headers. When S3 uses the "x-amz-" headers to calculate the signature, the format is as follows:
PUT

application/octet-stream
Wed, 05 Nov 2014 03:05:08 GMT
x-amz-acl:private
x-amz-meta-gid:0
x-amz-meta-mode:33188
x-amz-meta-mtime:1415156708
x-amz-meta-uid:0
There is no space after the colon.
2014-11-05 11:28:33 +08:00
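For reference, a sketch of how the x-amz-* block of the V2 string-to-sign is assembled (the helper is illustrative, not the actual curl.cpp code); the fix is simply that no space follows the colon.

    #include <map>
    #include <string>

    // std::map keeps keys sorted; keys are assumed to be lower-cased already.
    static std::string canonical_amz_headers(const std::map<std::string, std::string>& headers)
    {
        std::string result;
        for(std::map<std::string, std::string>::const_iterator it = headers.begin();
            it != headers.end(); ++it){
            if(it->first.compare(0, 5, "x-amz") == 0){
                result += it->first + ":" + it->second + "\n";   // "name:value", no space
            }
        }
        return result;
    }
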
651e8c3158 Merge pull request #64 from andrewgaul/failed-read-eio
Return EIO on failed read
2014-11-03 01:03:32 +09:00
77d4d066b5 Merge pull request #74 from vincentbernat/fix/url-may-omit-scheme
url: handle scheme omission
2014-10-26 16:18:03 +09:00
1e97e99aa0 Merge pull request #73 from vincentbernat/fix/git-ignore
Small gitignore fixes
2014-10-26 16:12:44 +09:00
7212072ff0 url: handle scheme omission
When the scheme is omitted in URL overriding (for example `example.com`
instead of `https://example.com`), s3fs modifies the URL by
inserting `s3.` in the middle of the name (`examples3..com`).

This can be a bit difficult to troubleshoot, and curl seems to handle
scheme-less requests just fine. So, just handle this case correctly.
2014-10-23 10:25:17 +02:00
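A hedged sketch of the guard (host_begin is an illustrative helper, not the real prepare_url code): only splice text into the URL after an explicit scheme; if the scheme is omitted, the host starts at position 0 and nothing should be inserted into its middle.

    #include <string>

    static std::string::size_type host_begin(const std::string& url)
    {
        std::string::size_type pos = url.find("://");
        return (pos == std::string::npos) ? 0 : pos + 3;
    }

    // Prepending the bucket for virtual-hosted style requests:
    //   "https://example.com" -> "https://bucket.example.com"
    //   "example.com"         -> "bucket.example.com"   (previously mangled)
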
8bcab645e1 gitignore: add test-driver and compile
Those are generated by latest versions of autotools.
2014-10-23 10:02:59 +02:00
9013917d58 gitignore: use absolute path
The current content of `.gitignore` is using relative paths. For
example, `test/config.log` would be ignored while it doesn't seem to be
the intent. Use absolute paths. They are still relative to the root of
the repository.
2014-10-23 10:01:08 +02:00
1eddf92c35 Merge pull request #72 from s3fs-fuse/issue#68
Fixed #68(FreeBSD issue)
2014-10-22 23:30:32 +09:00
28d82c9ccd Fixed #68(FreeBSD issue) 2014-10-22 14:21:01 +00:00
2f90a04513 Merge pull request #71 from s3fs-fuse/issue#68
Fixed for #68(FreeBSD issue)
2014-10-21 23:58:20 +09:00
2724728476 Merge pull request #70 from s3fs-fuse/master
Fixed for #68(FreeBSD issue)
2014-10-21 23:56:41 +09:00
ed8f424c1a Merge pull request #69 from andrewgaul/always-true
Address clang always true warnings
2014-10-21 23:53:30 +09:00
50137fe026 Address clang always true warnings 2014-10-16 23:34:12 -07:00
9237d07226 Merge pull request #63 from jollyroger/spelling
Fix spelling errors
2014-10-13 11:38:13 +09:00
8c2be4aa85 Merge pull request #62 from jollyroger/fix-stray-chars
Remove stray chars from source files
2014-10-13 11:34:52 +09:00
ccaed9a91c Merge pull request #60 from andrewgaul/check-bucket-disable-fail-on-error
Emit user-friendly log messages on failed CheckBucket requests
2014-10-13 11:33:17 +09:00
a1ca8b7124 Return EIO on failed read
Previously S3fsMultiCurl::MultiRead did not report read errors since
it did not treat failed callback setup as a fatal operation error.
Failed callback setups usually result from exceeding the number of
allowed retries.  Previously cp did not report an error during a
network outage but now does:

$ cp ~/s3-path/s3-file .
cp: error reading ‘/home/gaul/s3-path/s3-file’: Input/output error
cp: failed to extend ‘./s3-file’: Input/output error
2014-10-03 21:30:11 -07:00
6633366218 Fix spelling errors 2014-10-01 13:42:39 +03:00
22ea65f02c Remove stray chars from source files 2014-10-01 13:20:29 +03:00
3d69ee0c30 Emit response on failed CheckBucket requests
This allows callers to diagnose errors like InvalidAccessKeyId and
RequestTimeTooSkewed.
2014-09-28 16:12:53 -07:00
c88a5f38be Disable CURLOPT_FAILONERROR for CheckBucket
curl will not consume the body of a response when CURLOPT_FAILONERROR
is set.  This prevents logging of responses for failed requests.
2014-09-28 16:12:43 -07:00
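A small sketch of the libcurl behaviour involved (the wrapper function is illustrative): with CURLOPT_FAILONERROR set, an HTTP error of 400 or above makes the transfer fail without delivering the body, so the S3 error XML never reaches the write callback; clearing it for the CheckBucket request keeps the body available for logging.

    #include <curl/curl.h>

    static void configure_check_bucket(CURL* hCurl)
    {
        // Leave FAILONERROR off so the error response body (e.g. the XML for
        // InvalidAccessKeyId or RequestTimeTooSkewed) is delivered and logged.
        curl_easy_setopt(hCurl, CURLOPT_FAILONERROR, 0L);
    }
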
38e6857824 Merge pull request #56 from s3fs-fuse/version1.78
version number increment.
2014-09-15 22:30:51 +09:00
ca72b9a6d0 version number increment. 2014-09-15 13:26:35 +00:00
741831344a Merge pull request #55 from s3fs-fuse/cleanupcodes
Cleaned up codes for next packaging.
2014-09-08 00:10:49 +09:00
7a7c7572ea Cleaned up codes for next packaging. 2014-09-07 15:08:27 +00:00
4c32bc0aa5 Merge pull request #54 from s3fs-fuse/bugfix#50
fixed a bug about configure.ac. issue #50
2014-09-07 22:58:07 +09:00
0e9cfeb808 fixed a bug about configure.ac. issue #50 2014-09-07 13:53:20 +00:00
ae4ae88b6d Merge pull request #53 from s3fs-fuse/sse-c
Support for SSE-C #39
2014-09-07 22:31:33 +09:00
f0c33f8ef2 clean codes 2014-08-27 00:59:49 +00:00
e3a33343b9 Merge pull request #51 from masahide/master
Removed BOM
2014-08-27 02:24:47 +09:00
20b1c207be fixed issue #39 2014-08-26 17:11:10 +00:00
f1ca5d0340 :set nobomb 2014-08-25 19:18:34 +09:00
cbec8da9a3 fixed a bug issue #49 2014-08-14 15:58:06 +00:00
7a55eab399 Support for SSE-C, issue #39 2014-07-19 19:02:55 +00:00
95f8cab139 Merge pull request #44 from s3fs-fuse/fixbug#40
Fixed a bug issue #40
2014-06-29 02:42:49 +09:00
c1a6d76fc3 Fixed a bug issue #40 2014-06-28 17:36:35 +00:00
08929696f7 Merge pull request #43 from s3fs-fuse/fixbug#41
Fixed a bug issue #41
2014-06-29 02:26:10 +09:00
ba34ba181a Fixed a bug issue #41 2014-06-28 17:24:25 +00:00
d2c887a371 Merge pull request #38 from s3fs-fuse/path-request-style
Added explanation in man page for support for path API request style.
2014-06-03 23:48:27 +09:00
d5113c0501 Added explanation in man page for support for path API request style. 2014-06-03 14:45:39 +00:00
29a37645dd Merge pull request #37 from Andrew-Dunn/path-request-style
Added support for path API request style.
2014-06-03 23:37:00 +09:00
601482eff5 Added support for path API request style.
Rather than using virtual host style requests, path style requests can be used
instead.

i.e. rather than bucketname.s3.amazonaws.com/... s3fs will be able to request
from s3.amazonaws.com/bucketname/...

This is useful for S3 compatible APIs which don't support the virtual host style
request.

It is enabled with the new option, `use_path_request_style`.

Example:

    /usr/bin/s3fs data ~/netcdf -o url="https://swift.rc.nectar.org.au:8888/" -o use_path_request_style -o allow_other -o uid=500 -o gid=500
2014-06-04 00:03:49 +10:00
f141bbd4b4 Merge pull request #36 from s3fs-fuse/gc#417
Changed codes for CR code in passwd file(googlecode issue#417).
2014-06-03 01:16:03 +09:00
61020370d5 Changed codes for CR code in passwd file(googlecode issue#417). 2014-06-02 16:12:55 +00:00
f1f7e76be5 Merge pull request #35 from s3fs-fuse/cryptlibs
Supports two more SSL libraries: NSS and GnuTLS.
2014-06-01 23:15:59 +09:00
160196798b Changed initializing logic for nss lib/openssl lib/s3fs own. 2014-06-01 03:54:02 +00:00
edad91186f Changed configuration switch from 'enable' to 'with' for libs 2014-05-10 16:45:46 +00:00
cd27f0aa54 Supported additional crypto libraries (GnuTLS and NSS), and added configure options 2014-05-06 14:23:05 +00:00
37 changed files with 4863 additions and 1149 deletions

.gitignore (vendored): 45 lines changed

@ -1,21 +1,26 @@
*.o
Makefile
Makefile.in
aclocal.m4
autom4te.cache/
config.guess
config.log
config.status
config.sub
configure
depcomp
doc/Makefile
doc/Makefile.in
install-sh
missing
src/.deps/
src/Makefile
src/Makefile.in
src/s3fs
test/Makefile
test/Makefile.in
/Makefile
/Makefile.in
/aclocal.m4
/autom4te.cache/
/config.guess
/config.log
/config.status
/config.sub
/configure
/depcomp
/test-driver
/compile
/doc/Makefile
/doc/Makefile.in
/install-sh
/missing
/src/.deps/
/src/Makefile
/src/Makefile.in
/src/s3fs
/src/test_*
/test/.deps/
/test/Makefile
/test/Makefile.in
/test/*.log

.mailmap (new file): 7 lines changed

@ -0,0 +1,7 @@
Adrian Petrescu <apetresc@df820570-a93a-0410-bd06-b72b767a4274>
Adrian Petrescu <apetresc@gmail.com@df820570-a93a-0410-bd06-b72b767a4274>
Ben Lemasurier <ben.lemasurier@gmail.com@df820570-a93a-0410-bd06-b72b767a4274>
Dan Moore <mooredan@suncup.net@df820570-a93a-0410-bd06-b72b767a4274>
Randy Rizun <rrizun@df820570-a93a-0410-bd06-b72b767a4274>
Randy Rizun <rrizun@rrizun-ThinkPad-T530.(none)>
Takeshi Nakatani <ggtakec@gmail.com@df820570-a93a-0410-bd06-b72b767a4274>

.travis.yml (new file): 17 lines changed

@ -0,0 +1,17 @@
language: cpp
dist: trusty
cache: apt
before_install:
- sudo apt-get update -qq
- sudo apt-get install -qq libfuse-dev
script:
- ./autogen.sh
- ./configure
- make
- make check -C src
# Travis granted s3fs access to their upcoming alpha testing stack which may
# allow us to use FUSE.
# TODO: Travis changed their infrastructure some time in June 2015 such that
# this does not work currently
#- modprobe fuse
#- make check -C test

ChangeLog

@ -1,6 +1,70 @@
ChangeLog for S3FS
------------------
Version 1.79 -- Jul 19, 2015
issue #60 - Emit user-friendly log messages on failed CheckBucket requests
issue #62 - Remove stray chars from source files
issue #63 - Fix spelling errors
issue #68 - FreeBSD issue
issue #69 - Address clang always true warnings
issue #73 - Small gitignore fixes
issue #74 - url: handle scheme omission
issue #83 - Changed option processing to use strtol() to get a umask
issue #93 - Add simple unit tests for trim functions
issue #100 - CURL handles not properly initialized to use DNS or SSL session caching
issue #101 - Optimized function "bool directory_empty()"
issue #103 - Remove prefix option in s3fs man page - issue#87
issue #104 - fix rename before close
issue #116 - Supported signature version 4
issue #119 - Added new mp_umask option about issue#107, pr#110
issue #124 - Fallback to v2 signatures correctly.
issue #130 - refactor integration tests create/cleanup file
issue #131 - Test ls
issue #132 - Use S3Proxy to run integration tests
issue #134 - Include Content-Type in complete MPU V2 signature
issue #135 - Correct V4 signature for initiate multipart upload
issue #136 - Small fixes to integration tests
issue #137 - Add test for multi-part upload
issue #138 - Fixed bugs, not turn use_cache off and try to load to end - issue#97
issue #143 - Fixed a bug no use_cache case about fixed #138 - issue#141
issue #144 - Add Travis configuration
issue #146 - add exit handler to cleanup on failures
issue #147 - Use S3Proxy 1.4.0-SNAPSHOT
issue #150 - Fixed a bug not handling fsync - #145
issue #154 - Fixed url-encoding for ampersand etc on sigv4 - Improvement/#149
issue #155 - Fixed a bug: unable to mount bucket subdirectory
issue #156 - Fixed a bug about ssl session sharing with libcurl older than 7.23.0 - issue#126
issue #159 - Upgrade to S3Proxy 1.4.0
issue #164 - send the correct Host header when using -o url
issue #165 - Auth v4 refactor
issue #167 - Increased default connecting/reading/writing timeout value
issue #168 - switch to use region specific endpoints to compute correct v4 signature
issue #170 - Reviewed and fixed response codes print in curl.cpp - #157
issue #171 - Support buckets with mixed-case names
issue #173 - Run integration tests via Travis
issue #176 - configure.ac: detect target, if target is darwin (OSX), then change the minimum fuse version
issue #177 - Add .mailmap
issue #178 - Update .gitignore
issue #184 - Add usage information for multipart_size
issue #185 - Correct obvious typos in usage and README
issue #190 - Add a no_check_certificate option.
issue #194 - Tilde in a file-name breaks things (EPERM)
issue #198 - Disable integration tests for Travis
issue #199 - Supported extended attributes(retry)
issue #200 - fixed fallback to sigv2 for bucket create and GCS
issue #202 - Specialize {set,get}xattr for OS X
issue #204 - Add integration test for xattr
issue #207 - Fixed a few small spelling issues.
Version 1.78 -- Sep 15, 2014
issue #29 - Possible to create Debian/Ubuntu packages?(googlecode issue 109)
issue 417(googlecode) - Password file with DOS format is not handled properly
issue #41 - Failed making signature
issue #40 - Moving a directory containing more than 1000 files truncates the directory
issue #49 - use_sse is ignored when creating new files
issue #39 - Support for SSE-C
issue #50 - Cannot find pkg-config when configured with any SSL backend except openssl
Version 1.77 -- Apr 19, 2014
issue 405(googlecode) - enable_content_md5 Input/output error
issue #14 - s3fs -u should return 0 if there are no lost multiparts

Makefile.am

@ -1,3 +1,22 @@
######################################################################
# s3fs - FUSE-based file system backed by Amazon S3
#
# Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
######################################################################
SUBDIRS=src test doc
EXTRA_DIST=doc
@ -8,3 +27,4 @@ dist-hook:
release : dist ../utils/release.sh
../utils/release.sh $(DIST_ARCHIVES)

README: 6 lines changed

@ -14,6 +14,8 @@ In order to compile s3fs, You'll need the following requirements:
* FUSE (>= 2.8.4)
* FUSE Kernel module installed and running (RHEL 4.x/CentOS 4.x users - read below)
* OpenSSL-devel (0.9.8)
GnuTLS(gcrypt and nettle)
NSS
* Git
If you're using YUM or APT to install those packages, then it might require additional packaging, allow it to be installed.
@ -28,7 +30,7 @@ git clone git://github.com/s3fs-fuse/s3fs-fuse.git
Go inside the directory that has been created (s3fs-fuse) and run: ./autogen.sh
This will generate a number of scripts in the project directory, including a configure script which you should run with: ./configure
If configure succeeded, you can now run: make. If it didn't, make sure you meet the dependencies above.
This should compile the code. If everything goes OK, you'll be greated with "ok!" at the end and you'll have a binary file called "s3fs"
This should compile the code. If everything goes OK, you'll be greeted with "ok!" at the end and you'll have a binary file called "s3fs"
in the src/ directory.
As root (you can use su, su -, sudo) do: "make install" -this will copy the "s3fs" binary to /usr/local/bin.
@ -59,7 +61,7 @@ Known Issues:
-------------
s3fs should be working fine with S3 storage. However, There are couple of limitations:
* Currently s3fs could hang the CPU if you have lots of time-outs. This is *NOT* a fault of s3fs but rather libcurl. This happends when you try to copy thousands of files in 1 session, it doesn't happend when you upload hundreds of files or less.
* Currently s3fs could hang the CPU if you have lots of time-outs. This is *NOT* a fault of s3fs but rather libcurl. This happens when you try to copy thousands of files in 1 session, it doesn't happen when you upload hundreds of files or less.
* CentOS 4.x/RHEL 4.x users - if you use the kernel that shipped with your distribution and didn't upgrade to the latest kernel RedHat/CentOS gives, you might have a problem loading the "fuse" kernel. Please upgrade to the latest kernel (2.6.16 or above) and make sure "fuse" kernel module is compiled and loadable since FUSE requires this kernel module and s3fs requires it as well.
* Moving/renaming/erasing files takes time since the whole file needs to be accessed first. A workaround could be to use s3fs's cache support with the use_cache option.

configure.ac

@ -20,87 +20,210 @@
dnl Process this file with autoconf to produce a configure script.
AC_PREREQ(2.59)
AC_INIT(s3fs, 1.77)
AC_INIT(s3fs, 1.79)
AC_CANONICAL_SYSTEM
AM_INIT_AUTOMAKE()
AC_PROG_CXX
AC_PROG_CC
CXXFLAGS="$CXXFLAGS -Wall -D_FILE_OFFSET_BITS=64"
PKG_CHECK_MODULES([DEPS], [fuse >= 2.8.4 libcurl >= 7.0 libxml-2.0 >= 2.6 libcrypto >= 0.9])
case "$target" in
*-darwin* )
# Do something specific for mac
min_fuse_version=2.7.3
;;
*)
# Default Case
# assume other supported linux system
min_fuse_version=2.8.4
;;
esac
dnl malloc_trim function
AC_CHECK_FUNCS(malloc_trim, , )
dnl ----------------------------------------------
dnl Choice SSL library
dnl ----------------------------------------------
auth_lib=na
nettle_lib=no
dnl Initializing NSS(temporally)
AC_MSG_CHECKING([Initializing libcurl build with NSS])
AC_ARG_ENABLE(
nss-init,
dnl
dnl nettle library
dnl
AC_MSG_CHECKING([s3fs build with nettle(GnuTLS)])
AC_ARG_WITH(
nettle,
[AS_HELP_STRING([--with-nettle], [s3fs build with nettle in GnuTLS(default no)])],
[
AS_HELP_STRING(
[--enable-nss-init],
[Inilializing libcurl with NSS (default is no)]
)
],
[
case "${enableval}" in
case "${withval}" in
yes)
AC_MSG_RESULT(yes)
nss_init_enabled=yes
nettle_lib=yes
;;
*)
AC_MSG_RESULT(no)
;;
esac
],
[
AC_MSG_RESULT(no)
])
dnl
dnl use openssl library for ssl
dnl
AC_MSG_CHECKING([s3fs build with OpenSSL])
AC_ARG_WITH(
openssl,
[AS_HELP_STRING([--with-openssl], [s3fs build with OpenSSL(default is no)])],
[
case "${withval}" in
yes)
AC_MSG_RESULT(yes)
AS_IF(
[test $nettle_lib = no],
[auth_lib=openssl],
[AC_MSG_ERROR([could not set openssl with nettle, nettle is only for gnutls library])])
;;
*)
AC_MSG_RESULT(no)
;;
esac
],
[
AC_MSG_RESULT(no)
])
dnl
dnl use GnuTLS library for ssl
dnl
AC_MSG_CHECKING([s3fs build with GnuTLS])
AC_ARG_WITH(
gnutls,
[AS_HELP_STRING([--with-gnutls], [s3fs build with GnuTLS(default is no)])],
[
case "${withval}" in
yes)
AC_MSG_RESULT(yes)
AS_IF(
[test $auth_lib = na],
[
AS_IF(
[test $nettle_lib = no],
[auth_lib=gnutls],
[auth_lib=nettle])
],
[AC_MSG_ERROR([could not set gnutls because already set another ssl library])])
;;
*)
AC_MSG_RESULT(no)
;;
esac
],
[
AC_MSG_RESULT(no)
])
dnl
dnl use nss library for ssl
dnl
AC_MSG_CHECKING([s3fs build with NSS])
AC_ARG_WITH(
nss,
[AS_HELP_STRING([--with-nss], [s3fs build with NSS(default is no)])],
[
case "${withval}" in
yes)
AC_MSG_RESULT(yes)
AS_IF(
[test $auth_lib = na],
[
AS_IF(
[test $nettle_lib = no],
[auth_lib=nss],
[AC_MSG_ERROR([could not set openssl with nettle, nettle is only for gnutls library])])
],
[AC_MSG_ERROR([could not set nss because already set another ssl library])])
;;
*)
AC_MSG_RESULT(no)
nss_init_enabled=no
;;
esac
],
[
AC_MSG_RESULT(no)
nss_init_enabled=no
])
AS_IF(
[test $nss_init_enabled = yes],
[
AC_DEFINE(NSS_INIT_ENABLED, 1)
AC_CHECK_LIB(nss3, NSS_NoDB_Init, , [AC_MSG_ERROR(not found NSS libraries)])
AC_CHECK_LIB(plds4, PL_ArenaFinish, , [AC_MSG_ERROR(not found PL_ArenaFinish)])
AC_CHECK_LIB(nspr4, PR_Cleanup, , [AC_MSG_ERROR(not found PR_Cleanup)])
AC_CHECK_HEADER(nss.h, , [AC_MSG_ERROR(not found nss.h)])
AC_CHECK_HEADER(nspr4/prinit.h, , [AC_MSG_ERROR(not found prinit.h)])
AC_PATH_PROG(NSSCONFIG, [nss-config], no)
AS_IF(
[test $NSSCONFIG = no],
[
DEPS_CFLAGS="$DEPS_CFLAGS -I/usr/include/nss3"
DEPS_LIBS="$DEPS_LIBS -lnss3"
],
[
addcflags=`nss-config --cflags`
DEPS_CFLAGS="$DEPS_CFLAGS $addcflags"
dnl addlib=`nss-config --libs`
dnl DEPS_LIBS="$DEPS_LIBS $addlib"
DEPS_LIBS="$DEPS_LIBS -lnss3"
])
AC_PATH_PROG(NSPRCONFIG, [nspr-config], no)
AS_IF(
[test $NSPRCONFIG = no],
[
DEPS_CFLAGS="$DEPS_CFLAGS -I/usr/include/nspr4"
DEPS_LIBS="$DEPS_LIBS -lnspr4 -lplds4"
],
[
addcflags=`nspr-config --cflags`
DEPS_CFLAGS="$DEPS_CFLAGS $addcflags"
dnl addlib=`nspr-config --libs`
dnl DEPS_LIBS="$DEPS_LIBS $addlib"
DEPS_LIBS="$DEPS_LIBS -lnspr4 -lplds4"
])
])
[test $auth_lib = na],
AS_IF(
[test $nettle_lib = no],
[auth_lib=openssl],
[AC_MSG_ERROR([could not set nettle without GnuTLS library])]
)
)
AS_UNSET(nss_enabled)
dnl
dnl For PKG_CONFIG before checking nss/gnutls.
dnl this is redundant checking, but we need checking before following.
dnl
PKG_CHECK_MODULES([common_lib_checking], [fuse >= ${min_fuse_version} libcurl >= 7.0 libxml-2.0 >= 2.6])
AC_MSG_CHECKING([compile s3fs with])
case "${auth_lib}" in
openssl)
AC_MSG_RESULT(OpenSSL)
PKG_CHECK_MODULES([DEPS], [fuse >= ${min_fuse_version} libcurl >= 7.0 libxml-2.0 >= 2.6 libcrypto >= 0.9])
;;
gnutls)
AC_MSG_RESULT(GnuTLS-gcrypt)
gnutls_nettle=""
AC_CHECK_LIB(gnutls, gcry_control, [gnutls_nettle=0])
AS_IF([test "$gnutls_nettle" = ""], [AC_CHECK_LIB(gcrypt, gcry_control, [gnutls_nettle=0])])
AS_IF([test $gnutls_nettle = 0],
[
PKG_CHECK_MODULES([DEPS], [fuse >= ${min_fuse_version} libcurl >= 7.0 libxml-2.0 >= 2.6 gnutls >= 2.12.0 ])
LIBS="-lgnutls -lgcrypt $LIBS"
AC_MSG_CHECKING([gnutls is build with])
AC_MSG_RESULT(gcrypt)
],
[AC_MSG_ERROR([GnuTLS found, but gcrypt not found])])
;;
nettle)
AC_MSG_RESULT(GnuTLS-nettle)
gnutls_nettle=""
AC_CHECK_LIB(gnutls, nettle_MD5Init, [gnutls_nettle=1])
AS_IF([test "$gnutls_nettle" = ""], [AC_CHECK_LIB(nettle, nettle_MD5Init, [gnutls_nettle=1])])
AS_IF([test $gnutls_nettle = 1],
[
PKG_CHECK_MODULES([DEPS], [fuse >= ${min_fuse_version} libcurl >= 7.0 libxml-2.0 >= 2.6 nettle >= 2.7.1 ])
LIBS="-lgnutls -lnettle $LIBS"
AC_MSG_CHECKING([gnutls is build with])
AC_MSG_RESULT(nettle)
],
[AC_MSG_ERROR([GnuTLS found, but nettle not found])])
;;
nss)
AC_MSG_RESULT(NSS)
PKG_CHECK_MODULES([DEPS], [fuse >= ${min_fuse_version} libcurl >= 7.0 libxml-2.0 >= 2.6 nss >= 3.15.0 ])
;;
*)
AC_MSG_ERROR([unknown ssl library type.])
;;
esac
AM_CONDITIONAL([USE_SSL_OPENSSL], [test "$auth_lib" = openssl])
AM_CONDITIONAL([USE_SSL_GNUTLS], [test "$auth_lib" = gnutls -o "$auth_lib" = nettle])
AM_CONDITIONAL([USE_GNUTLS_NETTLE], [test "$auth_lib" = nettle])
AM_CONDITIONAL([USE_SSL_NSS], [test "$auth_lib" = nss])
dnl ----------------------------------------------
dnl end of ssl library
dnl ----------------------------------------------
dnl malloc_trim function
AC_CHECK_FUNCS(malloc_trim, , )
AC_CONFIG_FILES(Makefile src/Makefile test/Makefile doc/Makefile)
AC_OUTPUT

doc/Makefile.am

@ -1 +1,21 @@
######################################################################
# s3fs - FUSE-based file system backed by Amazon S3
#
# Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
######################################################################
dist_man1_MANS = man/s3fs.1

doc/man/s3fs.1

@ -10,7 +10,7 @@ S3FS \- FUSE-based file system backed by Amazon S3
\fBumount mountpoint
.SS utility mode ( remove interrupted multipart uploading objects )
.TP
\fBs3fs -u bucket
\fBs3fs \-u bucket
.SH DESCRIPTION
s3fs is a FUSE filesystem that allows you to mount an Amazon S3 bucket as a local filesystem. It stores files natively and transparently in S3 (i.e., you can use other programs to access the same files).
.SH AUTHENTICATION
@ -53,9 +53,6 @@ the default canned acl to apply to all written S3 objects, e.g., "public-read".
Any created files will have this canned acl.
Any updated files will also have this canned acl applied!
.TP
\fB\-o\fR prefix (default="") (coming soon!)
a prefix to append to all S3 objects.
.TP
\fB\-o\fR retries (default="2")
number of times to retry a failed S3 transaction.
.TP
@ -71,9 +68,13 @@ this option can not be specified with use_sse.
(can specify use_rrs=1 for old version)
.TP
\fB\-o\fR use_sse (default is disable)
use Amazon's Server Site Encryption.
this option can not be specified with use_rrs.
(can specify use_sse=1 for old version)
use Amazon's Server-Side Encryption or Server-Side Encryption with Customer-Provided Encryption Keys.
this option can not be specified with use_rrs. specifying only "use_sse" or "use_sse=1" enables Server-Side Encryption.(use_sse=1 for old version)
specifying this option with file path which has some SSE-C secret key enables Server-Side Encryption with Customer-Provided Encryption Keys.(use_sse=file)
the file must be 600 permission. the file can have some lines, each line is one SSE-C key. the first line in file is used as Customer-Provided Encryption Keys for uploading and change headers etc.
if there are some keys after the first line, those are used for downloading objects which were encrypted with a key other than the first one.
so that, you can keep all SSE-C keys in file, that is SSE-C key history.
if AWSSSECKEYS environment is set, you can set SSE-C key instead of this option.
.TP
\fB\-o\fR passwd_file (default="")
specify the path to the password file, which which takes precedence over the password in $HOME/.passwd-s3fs and /etc/passwd-s3fs
@ -99,10 +100,10 @@ If you specify this option for set "Content-Encoding" HTTP header, please take c
\fB\-o\fR public_bucket (default="" which means disabled)
anonymously mount a public bucket when set to 1, ignores the $HOME/.passwd-s3fs and /etc/passwd-s3fs files.
.TP
\fB\-o\fR connect_timeout (default="10" seconds)
\fB\-o\fR connect_timeout (default="300" seconds)
time to wait for connection before giving up.
.TP
\fB\-o\fR readwrite_timeout (default="30" seconds)
\fB\-o\fR readwrite_timeout (default="60" seconds)
time to wait between read/write activity before giving up.
.TP
\fB\-o\fR max_stat_cache_size (default="1000" entries (about 4MB))
@ -117,6 +118,9 @@ s3fs always has to check whether file(or sub directory) exists under object(path
It increases ListBucket request and makes performance bad.
You can specify this option for performance, s3fs memorizes in stat cache that the object(file or directory) does not exist.
.TP
\fB\-o\fR no_check_certificate (by default this option is disabled) - do not check ssl certificate.
server certificate won't be checked against the available certificate authorities.
.TP
\fB\-o\fR nodnscache - disable dns cache.
s3fs is always using dns cache, this option make dns cache disable.
.TP
@ -134,7 +138,7 @@ It is necessary to set this value depending on a CPU and a network band.
This option is lated to fd_page_size option and affects it.
.TP
\fB\-o\fR fd_page_size(default="52428800"(50MB))
number of internal management page size for each file discriptor.
number of internal management page size for each file descriptor.
For delayed reading and writing by s3fs, s3fs manages pages which is separated from object. Each pages has a status that data is already loaded(or not loaded yet).
This option should not be changed when you don't have a trouble with performance.
This value is changed automatically by parallel_count and multipart_size values(fd_page_size value = parallel_count * multipart_size).
@ -148,6 +152,22 @@ This option is lated to fd_page_size option and affects it.
\fB\-o\fR url (default="http://s3.amazonaws.com")
sets the url to use to access Amazon S3. If you want to use HTTPS, then you can set url=https://s3.amazonaws.com
.TP
\fB\-o\fR endpoint (default="us-east-1")
sets the endpoint to use.
If this option is not specified, s3fs uses \"us-east-1\" region as the default.
If the s3fs could not connect to the region specified by this option, s3fs could not run.
But if you do not specify this option, and if you can not connect with the default region, s3fs will retry to automatically connect to the other region.
So s3fs can know the correct region name, because s3fs can find it in an error from the S3 server.
.TP
\fB\-o\fR sigv2 (default is signature version 4)
sets signing AWS requests by using Signature Version 2.
.TP
\fB\-o\fR mp_umask (default is "0000")
sets umask for the mount point directory.
If allow_other option is not set, s3fs allows access to the mount point only to the owner.
In the opposite case s3fs allows access to all users as the default.
But if you set allow_other with this option, you can control the permissions of the mount point with this option, like umask.
.TP
\fB\-o\fR nomultipart - disable multipart uploads
.TP
\fB\-o\fR enable_content_md5 ( default is disable )
@ -171,9 +191,12 @@ If you set this option, s3fs do not use PUT with "x-amz-copy-source"(copy api).
For a distributed object storage which is compatibility S3 API without PUT(copy api).
This option is a subset of nocopyapi option. The nocopyapi option does not use copy-api for all command(ex. chmod, chown, touch, mv, etc), but this option does not use copy-api for only rename command(ex. mv).
If this option is specified with nocopapi, the s3fs ignores it.
.TP
\fB\-o\fR use_path_request_style (use legacy API calling style)
Enable compatibility with S3-like APIs which do not support the virtual-host request style, by using the older path request style.
.SH FUSE/MOUNT OPTIONS
.TP
Most of the generic mount options described in 'man mount' are supported (ro, rw, suid, nosuid, dev, nodev, exec, noexec, atime, noatime, sync async, dirsync). Filesystems are mounted with '-onodev,nosuid' by default, which can only be overridden by a privileged user.
Most of the generic mount options described in 'man mount' are supported (ro, rw, suid, nosuid, dev, nodev, exec, noexec, atime, noatime, sync async, dirsync). Filesystems are mounted with '\-onodev,nosuid' by default, which can only be overridden by a privileged user.
.TP
There are many FUSE specific mount options that can be specified. e.g. allow_other. See the FUSE README for the full set.
.SH NOTES

src/Makefile.am

@ -1,7 +1,44 @@
######################################################################
# s3fs - FUSE-based file system backed by Amazon S3
#
# Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
######################################################################
bin_PROGRAMS=s3fs
AM_CPPFLAGS = $(DEPS_CFLAGS)
if USE_GNUTLS_NETTLE
AM_CPPFLAGS += -DUSE_GNUTLS_NETTLE
endif
s3fs_SOURCES = s3fs.cpp s3fs.h curl.cpp curl.h cache.cpp cache.h string_util.cpp string_util.h s3fs_util.cpp s3fs_util.h fdcache.cpp fdcache.h common_auth.cpp s3fs_auth.h common.h
if USE_SSL_OPENSSL
s3fs_SOURCES += openssl_auth.cpp
endif
if USE_SSL_GNUTLS
s3fs_SOURCES += gnutls_auth.cpp
endif
if USE_SSL_NSS
s3fs_SOURCES += nss_auth.cpp
endif
s3fs_SOURCES = s3fs.cpp s3fs.h curl.cpp curl.h cache.cpp cache.h string_util.cpp string_util.h s3fs_util.cpp s3fs_util.h fdcache.cpp fdcache.h common.h
s3fs_LDADD = $(DEPS_LIBS)
noinst_PROGRAMS = test_string_util
test_string_util_SOURCES = string_util.cpp test_string_util.cpp
TESTS = test_string_util

src/cache.cpp

@ -35,6 +35,7 @@
#include "cache.h"
#include "s3fs.h"
#include "s3fs_util.h"
#include "string_util.h"
using namespace std;
@ -269,24 +270,18 @@ bool StatCache::AddStat(std::string& key, headers_t& meta, bool forcedir)
ent->meta.clear();
//copy only some keys
for(headers_t::iterator iter = meta.begin(); iter != meta.end(); ++iter){
string tag = (*iter).first;
string value = (*iter).second;
if(tag == "Content-Type"){
ent->meta[tag] = value;
}else if(tag == "Content-Length"){
ent->meta[tag] = value;
}else if(tag == "ETag"){
ent->meta[tag] = value;
}else if(tag == "Last-Modified"){
ent->meta[tag] = value;
string tag = lower(iter->first);
string value = iter->second;
if(tag == "content-type"){
ent->meta[iter->first] = value;
}else if(tag == "content-length"){
ent->meta[iter->first] = value;
}else if(tag == "etag"){
ent->meta[iter->first] = value;
}else if(tag == "last-modified"){
ent->meta[iter->first] = value;
}else if(tag.substr(0, 5) == "x-amz"){
ent->meta[tag] = value;
}else{
// Check for upper case
transform(tag.begin(), tag.end(), tag.begin(), static_cast<int (*)(int)>(std::tolower));
if(tag.substr(0, 5) == "x-amz"){
ent->meta[tag] = value;
}
ent->meta[tag] = value; // key is lower case for "x-amz"
}
}
// add
@ -440,3 +435,11 @@ bool convert_header_to_stat(const char* path, headers_t& meta, struct stat* pst,
return true;
}
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: noet sw=4 ts=4 fdm=marker
* vim<600: noet sw=4 ts=4
*/

src/cache.h

@ -1,3 +1,22 @@
/*
* s3fs - FUSE-based file system backed by Amazon S3
*
* Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef S3FS_CACHE_H_
#define S3FS_CACHE_H_
@ -105,3 +124,12 @@ class StatCache
bool convert_header_to_stat(const char* path, headers_t& meta, struct stat* pst, bool forcedir = false);
#endif // S3FS_CACHE_H_
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: noet sw=4 ts=4 fdm=marker
* vim<600: noet sw=4 ts=4
*/

src/common.h

@ -1,3 +1,22 @@
/*
* s3fs - FUSE-based file system backed by Amazon S3
*
* Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef S3FS_COMMON_H_
#define S3FS_COMMON_H_
@ -62,6 +81,26 @@
//
typedef std::map<std::string, std::string> headers_t;
//
// Header "x-amz-meta-xattr" is for extended attributes.
// This header is url encoded string which is json formated.
// x-amz-meta-xattr:urlencod({"xattr-1":"base64(value-1)","xattr-2":"base64(value-2)","xattr-3":"base64(value-3)"})
//
typedef struct xattr_value{
unsigned char* pvalue;
size_t length;
xattr_value(unsigned char* pval = NULL, size_t len = 0) : pvalue(pval), length(len) {}
~xattr_value()
{
if(pvalue){
free(pvalue);
}
}
}XATTRVAL, *PXATTRVAL;
typedef std::map<std::string, PXATTRVAL> xattrs_t;
//
// Global valiables
//
@ -69,10 +108,21 @@ extern bool debug;
extern bool foreground;
extern bool foreground2;
extern bool nomultipart;
extern bool pathrequeststyle;
extern std::string program_name;
extern std::string service_path;
extern std::string host;
extern std::string bucket;
extern std::string mount_prefix;
extern std::string endpoint;
#endif // S3FS_COMMON_H_
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: noet sw=4 ts=4 fdm=marker
* vim<600: noet sw=4 ts=4
*/
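As a worked example of the x-amz-meta-xattr scheme described in the common.h comment above (illustrative only; the exact escaping may differ): setting the extended attribute user.foo to "bar" would be stored roughly as

    x-amz-meta-xattr: %7B%22user.foo%22%3A%22YmFy%22%7D

i.e. the JSON {"user.foo":"YmFy"} URL-encoded, where "YmFy" is base64("bar").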

src/common_auth.cpp (new file): 188 lines changed

@ -0,0 +1,188 @@
/*
* s3fs - FUSE-based file system backed by Amazon S3
*
* Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <string>
#include "s3fs_auth.h"
using namespace std;
//-------------------------------------------------------------------
// Utility Function
//-------------------------------------------------------------------
char* s3fs_base64(const unsigned char* input, size_t length)
{
static const char* base = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";
char* result;
if(!input || 0 >= length){
return NULL;
}
if(NULL == (result = (char*)malloc((((length / 3) + 1) * 4 + 1) * sizeof(char)))){
return NULL; // ENOMEM
}
unsigned char parts[4];
size_t rpos;
size_t wpos;
for(rpos = 0, wpos = 0; rpos < length; rpos += 3){
parts[0] = (input[rpos] & 0xfc) >> 2;
parts[1] = ((input[rpos] & 0x03) << 4) | ((((rpos + 1) < length ? input[rpos + 1] : 0x00) & 0xf0) >> 4);
parts[2] = (rpos + 1) < length ? (((input[rpos + 1] & 0x0f) << 2) | ((((rpos + 2) < length ? input[rpos + 2] : 0x00) & 0xc0) >> 6)) : 0x40;
parts[3] = (rpos + 2) < length ? (input[rpos + 2] & 0x3f) : 0x40;
result[wpos++] = base[parts[0]];
result[wpos++] = base[parts[1]];
result[wpos++] = base[parts[2]];
result[wpos++] = base[parts[3]];
}
result[wpos] = '\0';
return result;
}
inline unsigned char char_decode64(const char ch)
{
unsigned char by;
if('A' <= ch && ch <= 'Z'){ // A - Z
by = static_cast<unsigned char>(ch - 'A');
}else if('a' <= ch && ch <= 'z'){ // a - z
by = static_cast<unsigned char>(ch - 'a' + 26);
}else if('0' <= ch && ch <= '9'){ // 0 - 9
by = static_cast<unsigned char>(ch - '0' + 52);
}else if('+' == ch){ // +
by = 62;
}else if('/' == ch){ // /
by = 63;
}else if('=' == ch){ // =
by = 64;
}else{ // something wrong
by = 64;
}
return by;
}
unsigned char* s3fs_decode64(const char* input, size_t* plength)
{
unsigned char* result;
if(!input || 0 == strlen(input) || !plength){
return NULL;
}
if(NULL == (result = (unsigned char*)malloc((strlen(input) + 1)))){
return NULL; // ENOMEM
}
unsigned char parts[4];
size_t input_len = strlen(input);
size_t rpos;
size_t wpos;
for(rpos = 0, wpos = 0; rpos < input_len; rpos += 4){
parts[0] = char_decode64(input[rpos]);
parts[1] = (rpos + 1) < input_len ? char_decode64(input[rpos + 1]) : 64;
parts[2] = (rpos + 2) < input_len ? char_decode64(input[rpos + 2]) : 64;
parts[3] = (rpos + 3) < input_len ? char_decode64(input[rpos + 3]) : 64;
result[wpos++] = ((parts[0] << 2) & 0xfc) | ((parts[1] >> 4) & 0x03);
if(64 == parts[2]){
break;
}
result[wpos++] = ((parts[1] << 4) & 0xf0) | ((parts[2] >> 2) & 0x0f);
if(64 == parts[3]){
break;
}
result[wpos++] = ((parts[2] << 6) & 0xc0) | (parts[3] & 0x3f);
}
*plength = wpos;
return result;
}
string s3fs_get_content_md5(int fd)
{
unsigned char* md5hex;
char* base64;
string Signature;
if(NULL == (md5hex = s3fs_md5hexsum(fd, 0, -1))){
return string("");
}
if(NULL == (base64 = s3fs_base64(md5hex, get_md5_digest_length()))){
return string(""); // ENOMEM
}
free(md5hex);
Signature = base64;
free(base64);
return Signature;
}
string s3fs_md5sum(int fd, off_t start, ssize_t size)
{
size_t digestlen = get_md5_digest_length();
char md5[2 * digestlen + 1];
char hexbuf[3];
unsigned char* md5hex;
if(NULL == (md5hex = s3fs_md5hexsum(fd, start, size))){
return string("");
}
memset(md5, 0, 2 * digestlen + 1);
for(size_t pos = 0; pos < digestlen; pos++){
snprintf(hexbuf, 3, "%02x", md5hex[pos]);
strncat(md5, hexbuf, 2);
}
free(md5hex);
return string(md5);
}
string s3fs_sha256sum(int fd, off_t start, ssize_t size)
{
size_t digestlen = get_sha256_digest_length();
char sha256[2 * digestlen + 1];
char hexbuf[3];
unsigned char* sha256hex;
if(NULL == (sha256hex = s3fs_sha256hexsum(fd, start, size))){
return string("");
}
memset(sha256, 0, 2 * digestlen + 1);
for(size_t pos = 0; pos < digestlen; pos++){
snprintf(hexbuf, 3, "%02x", sha256hex[pos]);
strncat(sha256, hexbuf, 2);
}
free(sha256hex);
return string(sha256);
}
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: noet sw=4 ts=4 fdm=marker
* vim<600: noet sw=4 ts=4
*/

src/curl.cpp: file diff suppressed because it is too large

src/curl.h

@ -1,3 +1,22 @@
/*
* s3fs - FUSE-based file system backed by Amazon S3
*
* Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef S3FS_CURL_H_
#define S3FS_CURL_H_
@ -46,7 +65,7 @@ struct filepart
{
bool uploaded; // does finish uploading
std::string etag; // expected etag value
int fd; // base file(temporary full file) discriptor
int fd; // base file(temporary full file) descriptor
off_t startpos; // seek fd point for uploading
ssize_t size; // uploading size
etaglist_t* etaglist; // use only parallel upload
@ -100,18 +119,14 @@ class S3fsMultiCurl;
// class S3fsCurl
//----------------------------------------------
typedef std::map<std::string, std::string> iamcredmap_t;
typedef std::map<std::string, std::string> sseckeymap_t;
typedef std::list<sseckeymap_t> sseckeylist_t;
// share
#define SHARE_MUTEX_DNS 0
#define SHARE_MUTEX_SSL_SESSION 1
#define SHARE_MUTEX_MAX 2
// internal use struct for openssl
struct CRYPTO_dynlock_value
{
pthread_mutex_t dyn_mutex;
};
// Class for lapping curl
//
class S3fsCurl
@ -140,9 +155,9 @@ class S3fsCurl
// class variables
static pthread_mutex_t curl_handles_lock;
static pthread_mutex_t curl_share_lock[SHARE_MUTEX_MAX];
static pthread_mutex_t* crypt_mutex;
static bool is_initglobal_done;
static CURLSH* hCurlShare;
static bool is_cert_check;
static bool is_dns_cache;
static bool is_ssl_session_cache;
static long connect_timeout;
@ -151,6 +166,7 @@ class S3fsCurl
static bool is_public_bucket;
static std::string default_acl; // TODO: to enum
static bool is_use_rrs;
static sseckeylist_t sseckeys;
static bool is_use_sse;
static bool is_content_md5;
static bool is_verbose;
@ -160,13 +176,13 @@ class S3fsCurl
static time_t AWSAccessTokenExpire;
static std::string IAM_role;
static long ssl_verify_hostname;
static const EVP_MD* evp_md;
static curltime_t curl_times;
static curlprogress_t curl_progress;
static std::string curl_ca_bundle;
static mimes_t mimeTypes;
static int max_parallel_cnt;
static off_t multipart_size;
static bool is_sigv4;
// variables
CURL* hCurl;
@ -190,6 +206,8 @@ class S3fsCurl
int b_postdata_remaining; // backup for retrying
off_t b_partdata_startpos; // backup for retrying
ssize_t b_partdata_size; // backup for retrying
bool b_ssekey_pos; // backup for retrying
std::string b_ssekey_md5; // backup for retrying
public:
// constructor/destructor
@ -206,11 +224,6 @@ class S3fsCurl
static void UnlockCurlShare(CURL* handle, curl_lock_data nLockData, void* useptr);
static bool InitCryptMutex(void);
static bool DestroyCryptMutex(void);
static void CryptMutexLock(int mode, int pos, const char* file, int line);
static unsigned long CryptGetThreadid(void);
static struct CRYPTO_dynlock_value* CreateDynCryptMutex(const char* file, int line);
static void DynCryptMutexLock(int mode, struct CRYPTO_dynlock_value* dyndata, const char* file, int line);
static void DestoryDynCryptMutex(struct CRYPTO_dynlock_value* dyndata, const char* file, int line);
static int CurlProgress(void *clientp, double dltotal, double dlnow, double ultotal, double ulnow);
static bool InitMimeType(const char* MimeFile = NULL);
@ -227,16 +240,19 @@ class S3fsCurl
static bool ParseIAMCredentialResponse(const char* response, iamcredmap_t& keyval);
static bool SetIAMCredentials(const char* response);
static bool PushbackSseKeys(std::string& onekey);
// methods
bool ResetHandle(void);
bool RemakeHandle(void);
bool ClearInternalData(void);
std::string CalcSignature(std::string method, std::string strMD5, std::string content_type, std::string date, std::string resource);
void insertV4Headers(const std::string &op, const std::string &path, const std::string &query_string, const std::string &payload_hash);
std::string CalcSignatureV2(std::string method, std::string strMD5, std::string content_type, std::string date, std::string resource);
std::string CalcSignature(std::string method, std::string canonical_uri, std::string query_string, std::string strdate, std::string payload_hash, std::string date8601);
bool GetUploadId(std::string& upload_id);
int GetIAMCredentials(void);
int PreMultipartPostRequest(const char* tpath, headers_t& meta, std::string& upload_id, bool ow_sse_flg);
int PreMultipartPostRequest(const char* tpath, headers_t& meta, std::string& upload_id, bool is_copy);
int CompleteMultipartPostRequest(const char* tpath, std::string& upload_id, etaglist_t& parts);
int UploadMultipartPostSetup(const char* tpath, int part_num, std::string& upload_id);
int UploadMultipartPostRequest(const char* tpath, int part_num, std::string& upload_id);
@ -246,12 +262,13 @@ class S3fsCurl
// class methods
static bool InitS3fsCurl(const char* MimeFile = NULL);
static bool DestroyS3fsCurl(void);
static int ParallelMultipartUploadRequest(const char* tpath, headers_t& meta, int fd, bool ow_sse_flg);
static int ParallelMultipartUploadRequest(const char* tpath, headers_t& meta, int fd);
static int ParallelGetObjectRequest(const char* tpath, int fd, off_t start, ssize_t size);
static bool CheckIAMCredentialUpdate(void);
// class methods(valiables)
static std::string LookupMimeType(std::string name);
static bool SetCheckCertificate(bool isCertCheck);
static bool SetDnsCache(bool isCache);
static bool SetSslSessionCache(bool isCache);
static long SetConnectTimeout(long timeout);
@ -263,6 +280,12 @@ class S3fsCurl
static std::string SetDefaultAcl(const char* acl);
static bool SetUseRrs(bool flag);
static bool GetUseRrs(void) { return S3fsCurl::is_use_rrs; }
static bool SetSseKeys(const char* filepath);
static bool LoadEnvSseKeys(void);
static bool GetSseKey(std::string& md5, std::string& ssekey);
static bool GetSseKeyMd5(int pos, std::string& md5);
static int GetSseKeyCount(void);
static bool IsSseCustomMode(void);
static bool SetUseSse(bool flag);
static bool GetUseSse(void) { return S3fsCurl::is_use_sse; }
static bool SetContentMd5(bool flag);
@ -280,29 +303,32 @@ class S3fsCurl
static const char* GetIAMRole(void) { return S3fsCurl::IAM_role.c_str(); }
static bool SetMultipartSize(off_t size);
static off_t GetMultipartSize(void) { return S3fsCurl::multipart_size; }
static bool SetSignatureV4(bool isset) { bool bresult = S3fsCurl::is_sigv4; S3fsCurl::is_sigv4 = isset; return bresult; }
static bool IsSignatureV4(void) { return S3fsCurl::is_sigv4; }
// methods
bool CreateCurlHandle(bool force = false);
bool DestroyCurlHandle(void);
bool AddSseKeyRequestHead(std::string& md5, bool is_copy);
bool GetResponseCode(long& responseCode);
int RequestPerform(void);
int DeleteRequest(const char* tpath);
bool PreHeadRequest(const char* tpath, const char* bpath = NULL, const char* savedpath = NULL);
bool PreHeadRequest(std::string& tpath, std::string& bpath, std::string& savedpath) {
return PreHeadRequest(tpath.c_str(), bpath.c_str(), savedpath.c_str());
bool PreHeadRequest(const char* tpath, const char* bpath = NULL, const char* savedpath = NULL, int ssekey_pos = -1);
bool PreHeadRequest(std::string& tpath, std::string& bpath, std::string& savedpath, int ssekey_pos = -1) {
return PreHeadRequest(tpath.c_str(), bpath.c_str(), savedpath.c_str(), ssekey_pos);
}
int HeadRequest(const char* tpath, headers_t& meta);
int PutHeadRequest(const char* tpath, headers_t& meta, bool ow_sse_flg);
int PutRequest(const char* tpath, headers_t& meta, int fd, bool ow_sse_flg);
int PreGetObjectRequest(const char* tpath, int fd, off_t start, ssize_t size);
int PutHeadRequest(const char* tpath, headers_t& meta, bool is_copy);
int PutRequest(const char* tpath, headers_t& meta, int fd);
int PreGetObjectRequest(const char* tpath, int fd, off_t start, ssize_t size, std::string& ssekeymd5);
int GetObjectRequest(const char* tpath, int fd, off_t start = -1, ssize_t size = -1);
int CheckBucket(void);
int ListBucketRequest(const char* tpath, const char* query);
int MultipartListRequest(std::string& body);
int AbortMultipartUpload(const char* tpath, std::string& upload_id);
int MultipartHeadRequest(const char* tpath, off_t size, headers_t& meta);
int MultipartUploadRequest(const char* tpath, headers_t& meta, int fd, bool ow_sse_flg);
int MultipartHeadRequest(const char* tpath, off_t size, headers_t& meta, bool is_copy);
int MultipartUploadRequest(const char* tpath, headers_t& meta, int fd, bool is_copy);
int MultipartRenameRequest(const char* from, const char* to, headers_t& meta, off_t size);
// methods(valiables)
@ -322,6 +348,7 @@ class S3fsCurl
int GetMultipartRetryCount(void) const { return retry_count; }
void SetMultipartRetryCount(int retrycnt) { retry_count = retrycnt; }
bool IsOverMultipartRetryCount(void) const { return (retry_count >= S3fsCurl::retries); }
int GetLastPreHeadSeecKeyPos(void) const { return b_ssekey_pos; }
};
//----------------------------------------------
@ -331,7 +358,7 @@ class S3fsCurl
//
typedef std::map<CURL*, S3fsCurl*> s3fscurlmap_t;
typedef bool (*S3fsMultiSuccessCallback)(S3fsCurl* s3fscurl); // callback for succeed multi request
typedef S3fsCurl* (*S3fsMultiRetryCallback)(S3fsCurl* s3fscurl); // callback for failuer and retrying
typedef S3fsCurl* (*S3fsMultiRetryCallback)(S3fsCurl* s3fscurl); // callback for failure and retrying
class S3fsMultiCurl
{
@ -401,6 +428,19 @@ std::string GetContentMD5(int fd);
unsigned char* md5hexsum(int fd, off_t start, ssize_t size);
std::string md5sum(int fd, off_t start, ssize_t size);
struct curl_slist* curl_slist_sort_insert(struct curl_slist* list, const char* data);
struct curl_slist* curl_slist_sort_insert(struct curl_slist* list, const char* key, const char* value);
std::string get_sorted_header_keys(const struct curl_slist* list);
std::string get_canonical_headers(const struct curl_slist* list, bool only_amz = false);
bool MakeUrlResource(const char* realpath, std::string& resourcepath, std::string& url);
std::string prepare_url(const char* url);
#endif // S3FS_CURL_H_
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: noet sw=4 ts=4 fdm=marker
* vim<600: noet sw=4 ts=4
*/

src/fd_cache.cpp

@ -19,6 +19,7 @@
*/
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/time.h>
@ -31,7 +32,6 @@
#include <string.h>
#include <assert.h>
#include <curl/curl.h>
#include <openssl/crypto.h>
#include <string>
#include <iostream>
#include <sstream>
@ -52,7 +52,7 @@ using namespace std;
// Symbols
//------------------------------------------------
#define MAX_MULTIPART_CNT 10000 // S3 multipart max count
#define FDPAGE_SIZE (50 * 1024 * 1024) // 50MB(parallel uploading is 5 parallel(default) * 10 MB)
#define FDPAGE_SIZE (50 * 1024 * 1024) // 50MB(parallel uploading is 5 parallel(default) * 10 MB)
//------------------------------------------------
// CacheFileStat class methods
@ -345,10 +345,14 @@ bool PageList::FindUninitPage(off_t start, off_t& resstart, size_t& ressize)
return false;
}
int PageList::GetUninitPages(fdpage_list_t& uninit_list, off_t start)
int PageList::GetUninitPages(fdpage_list_t& uninit_list, off_t start, off_t size)
{
for(fdpage_list_t::iterator iter = pages.begin(); iter != pages.end(); iter++){
if(start <= (*iter)->end()){
if((start + size) <= (*iter)->offset){
// reached the end of the requested range
break;
}
// after start pos
if(!(*iter)->init){
// found uninitialized area
@ -791,13 +795,13 @@ int FdEntity::Load(off_t start, off_t size)
// check loaded area & load
fdpage_list_t uninit_list;
if(0 < pagelist.GetUninitPages(uninit_list, start)){
if(0 < pagelist.GetUninitPages(uninit_list, start, size)){
for(fdpage_list_t::iterator iter = uninit_list.begin(); iter != uninit_list.end(); iter++){
if(-1 != size && (start + size) <= (*iter)->offset){
break;
}
// download
if((*iter)->bytes >= (2 * S3fsCurl::GetMultipartSize()) && !nomultipart){ // default 20MB
if((*iter)->bytes >= static_cast<size_t>(2 * S3fsCurl::GetMultipartSize()) && !nomultipart){ // default 20MB
// parallel request
// Additional time is needed for large files
time_t backup = 0;
@ -856,7 +860,7 @@ bool FdEntity::LoadFull(off_t* size, bool force_load)
return true;
}
int FdEntity::RowFlush(const char* tpath, headers_t& meta, bool ow_sse_flg, bool force_sync)
int FdEntity::RowFlush(const char* tpath, headers_t& meta, bool force_sync)
{
int result;
@ -903,13 +907,13 @@ int FdEntity::RowFlush(const char* tpath, headers_t& meta, bool ow_sse_flg, bool
if(120 > S3fsCurl::GetReadwriteTimeout()){
backup = S3fsCurl::SetReadwriteTimeout(120);
}
result = S3fsCurl::ParallelMultipartUploadRequest(tpath ? tpath : path.c_str(), meta, fd, ow_sse_flg);
result = S3fsCurl::ParallelMultipartUploadRequest(tpath ? tpath : path.c_str(), meta, fd);
if(0 != backup){
S3fsCurl::SetReadwriteTimeout(backup);
}
}else{
S3fsCurl s3fscurl(true);
result = s3fscurl.PutRequest(tpath ? tpath : path.c_str(), meta, fd, ow_sse_flg);
result = s3fscurl.PutRequest(tpath ? tpath : path.c_str(), meta, fd);
}
// seek to head of file.
@ -990,6 +994,24 @@ ssize_t FdEntity::Write(const char* bytes, off_t start, size_t size)
return wsize;
}
//------------------------------------------------
// FdManager symbol
//------------------------------------------------
// [NOTE]
// The NOCACHE_PATH_PREFIX_FORM symbol is needed when the local cache is not used.
// The s3fs interface functions in s3fs.cpp leave their processing to the
// FdManager and FdEntity classes; FdManager manages the list of local file
// stats and file descriptors in conjunction with the FdEntity class.
// When s3fs runs without a local cache, FdManager must hand out a new
// temporary file descriptor on every open, so it caches each fd under a
// dummy file path key instead of the real file path.
// This approach may not be complete, but it is an easy way to realize it.
//
#define NOCACHE_PATH_PREFIX_FORM " __S3FS_UNEXISTED_PATH_%lx__ / " // the embedded spaces are intentional; they keep this simple prefix from matching any real path
//------------------------------------------------
// FdManager class valiable
//------------------------------------------------
@ -1083,6 +1105,16 @@ bool FdManager::MakeCachePath(const char* path, string& cache_path, bool is_crea
return true;
}
bool FdManager::MakeRandomTempPath(const char* path, string& tmppath)
{
char szBuff[64];
sprintf(szBuff, NOCACHE_PATH_PREFIX_FORM, random()); // random() costs a little, but performance is not a concern here.
tmppath = szBuff;
tmppath += path ? path : "";
return true;
}
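
For illustration only (the hex value below is made up): with the cache disabled, the key generated here looks roughly like " __S3FS_UNEXISTED_PATH_3a1f9b2c__ / /bucket/dir/file", so a later lookup by the real object path misses, and GetFdEntity()/ExistOpen() below fall back to scanning the map for a matching file descriptor. A minimal usage sketch with an invented helper name:

#include <string>
#include "fd_cache.h"

// Generate the dummy no-cache map key for an object path.
static std::string make_nocache_key(const char* path)
{
    std::string key;
    FdManager::MakeRandomTempPath(path, key);   // " __S3FS_UNEXISTED_PATH_<hex>__ / " + path
    return key;                                 // never equals the real path
}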
//------------------------------------------------
// FdManager methods
//------------------------------------------------
@ -1123,9 +1155,9 @@ FdManager::~FdManager()
}
}
FdEntity* FdManager::GetFdEntity(const char* path)
FdEntity* FdManager::GetFdEntity(const char* path, int existfd)
{
FPRNINFO("[path=%s]", SAFESTRPTR(path));
FPRNINFO("[path=%s][fd=%d]", SAFESTRPTR(path), existfd);
if(!path || '\0' == path[0]){
return NULL;
@ -1133,10 +1165,24 @@ FdEntity* FdManager::GetFdEntity(const char* path)
AutoLock auto_lock(&FdManager::fd_manager_lock);
fdent_map_t::iterator iter = fent.find(string(path));
if(fent.end() == iter){
return NULL;
if(fent.end() != iter && (-1 == existfd || (*iter).second->GetFd() == existfd)){
return (*iter).second;
}
return (*iter).second;
if(-1 != existfd){
for(iter = fent.begin(); iter != fent.end(); iter++){
if((*iter).second && (*iter).second->GetFd() == existfd){
// found an open fd in the map
if(0 == strcmp((*iter).second->GetPath(), path)){
return (*iter).second;
}
// found the fd, but it is used by another file (the descriptor was recycled),
// so return NULL.
break;
}
}
}
return NULL;
}
FdEntity* FdManager::Open(const char* path, off_t size, time_t time, bool force_tmpfile, bool is_create)
@ -1165,8 +1211,22 @@ FdEntity* FdManager::Open(const char* path, off_t size, time_t time, bool force_
}
// make new obj
ent = new FdEntity(path, cache_path.c_str());
fent[string(path)] = ent;
if(0 < cache_path.size()){
// using cache
fent[string(path)] = ent;
}else{
// not using the cache, so the FdEntity is keyed by a path that does not really exist
// (though it is not guaranteed to be strictly nonexistent).
//
// [NOTE]
// For the reasoning behind this, see the comment at the definition of the
// NOCACHE_PATH_PREFIX_FORM symbol.
//
string tmppath("");
FdManager::MakeRandomTempPath(path, tmppath);
fent[tmppath] = ent;
}
}else{
return NULL;
}
@ -1178,6 +1238,50 @@ FdEntity* FdManager::Open(const char* path, off_t size, time_t time, bool force_
return ent;
}
FdEntity* FdManager::ExistOpen(const char* path, int existfd)
{
FPRNINFO("[path=%s][fd=%d]", SAFESTRPTR(path), existfd);
// search by real path
FdEntity* ent = Open(path, -1, -1, false, false);
if(!ent && -1 != existfd){
// search all FdEntity objects, since without the cache the map keys are dummy paths.
AutoLock auto_lock(&FdManager::fd_manager_lock);
for(fdent_map_t::iterator iter = fent.begin(); iter != fent.end(); iter++){
if((*iter).second && (*iter).second->GetFd() == existfd && (*iter).second->IsOpen()){
// found an open fd in the map
if(0 == strcmp((*iter).second->GetPath(), path)){
ent = (*iter).second;
// open
if(-1 == ent->Open(-1, -1)){
return NULL;
}
}else{
// found the fd, but it is used by another file (the descriptor was recycled),
// so return NULL.
}
break;
}
}
}
return ent;
}
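
A hypothetical caller (the s3fs.cpp diff is suppressed in this view, so this is only a sketch): a FUSE read/write handler can hand back the descriptor it stored in fuse_file_info->fh, so the entity is still found even when its map key is a dummy no-cache path rather than the real object path.

#include <stdint.h>
#include "fd_cache.h"

// Look up the open entity for an I/O request by its (path, fd) pair.
static FdEntity* find_entity_for_io(FdManager& fdmanager, const char* path, uint64_t fh)
{
    // returns NULL if the descriptor was recycled for another file
    return fdmanager.ExistOpen(path, static_cast<int>(fh));
}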
void FdManager::Rename(const std::string &from, const std::string &to)
{
fdent_map_t::iterator iter = fent.find(from);
if(fent.end() != iter){
// found
FPRNINFO("[from=%s][to=%s]", from.c_str(), to.c_str());
FdEntity* ent = (*iter).second;
fent.erase(iter);
ent->SetPath(to);
fent[to] = ent;
}
}
bool FdManager::Close(FdEntity* ent)
{
FPRNINFO("[ent->file=%s][ent->fd=%d]", ent ? ent->GetPath() : "", ent ? ent->GetFd() : -1);
@ -1197,3 +1301,11 @@ bool FdManager::Close(FdEntity* ent)
return false;
}
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: noet sw=4 ts=4 fdm=marker
* vim<600: noet sw=4 ts=4
*/

src/fd_cache.h

@ -1,3 +1,22 @@
/*
* s3fs - FUSE-based file system backed by Amazon S3
*
* Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef FD_CACHE_H_
#define FD_CACHE_H_
@ -66,7 +85,7 @@ class PageList
bool IsInit(off_t start, off_t size);
bool SetInit(off_t start, off_t size, bool is_init = true);
bool FindUninitPage(off_t start, off_t& resstart, size_t& ressize);
int GetUninitPages(fdpage_list_t& uninit_list, off_t start = 0);
int GetUninitPages(fdpage_list_t& uninit_list, off_t start = 0, off_t size = -1);
bool Serialize(CacheFileStat& file, bool is_output);
void Dump(void);
};
@ -83,7 +102,7 @@ class FdEntity
int refcnt; // reference count
std::string path; // object path
std::string cachepath; // local cache file path
int fd; // file discriptor(tmp file or cache file)
int fd; // file descriptor(tmp file or cache file)
FILE* file; // file pointer(tmp file or cache file)
bool is_modify; // if file is changed, this flag is true
@ -100,6 +119,7 @@ class FdEntity
bool IsOpen(void) const { return (-1 != fd); }
int Open(off_t size = -1, time_t time = -1);
const char* GetPath(void) const { return path.c_str(); }
void SetPath(const std::string &newpath) { path = newpath; }
int GetFd(void) const { return fd; }
int SetMtime(time_t time);
bool GetSize(off_t& size);
@ -110,8 +130,8 @@ class FdEntity
bool SetAllDisable(void) { return SetAllStatus(false); }
bool LoadFull(off_t* size = NULL, bool force_load = false);
int Load(off_t start, off_t size);
int RowFlush(const char* tpath, headers_t& meta, bool ow_sse_flg, bool force_sync = false);
int Flush(headers_t& meta, bool ow_sse_flg, bool force_sync = false) { return RowFlush(NULL, meta, ow_sse_flg, force_sync); }
int RowFlush(const char* tpath, headers_t& meta, bool force_sync = false);
int Flush(headers_t& meta, bool force_sync = false) { return RowFlush(NULL, meta, force_sync); }
ssize_t Read(char* bytes, off_t start, size_t size, bool force_load = false);
ssize_t Write(const char* bytes, off_t start, size_t size);
};
@ -146,11 +166,22 @@ class FdManager
static size_t SetPageSize(size_t size);
static size_t GetPageSize(void) { return FdManager::page_size; }
static bool MakeCachePath(const char* path, std::string& cache_path, bool is_create_dir = true);
static bool MakeRandomTempPath(const char* path, std::string& tmppath);
FdEntity* GetFdEntity(const char* path);
FdEntity* GetFdEntity(const char* path, int existfd = -1);
FdEntity* Open(const char* path, off_t size = -1, time_t time = -1, bool force_tmpfile = false, bool is_create = true);
FdEntity* ExistOpen(const char* path) { return Open(path, -1, -1, false, false); }
FdEntity* ExistOpen(const char* path, int existfd = -1);
void Rename(const std::string &from, const std::string &to);
bool Close(FdEntity* ent);
};
#endif // FD_CACHE_H_
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: noet sw=4 ts=4 fdm=marker
* vim<600: noet sw=4 ts=4
*/

src/gnutls_auth.cpp (new file, 453 lines)

@ -0,0 +1,453 @@
/*
* s3fs - FUSE-based file system backed by Amazon S3
*
* Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <pthread.h>
#include <unistd.h>
#include <syslog.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <string.h>
#include <gcrypt.h>
#include <gnutls/gnutls.h>
#include <gnutls/crypto.h>
#ifdef USE_GNUTLS_NETTLE
#include <nettle/md5.h>
#include <nettle/sha1.h>
#include <nettle/hmac.h>
#endif
#include <string>
#include <map>
#include "common.h"
#include "s3fs_auth.h"
using namespace std;
//-------------------------------------------------------------------
// Utility Function for version
//-------------------------------------------------------------------
#ifdef USE_GNUTLS_NETTLE
const char* s3fs_crypt_lib_name(void)
{
static const char version[] = "GnuTLS(nettle)";
return version;
}
#else // USE_GNUTLS_NETTLE
const char* s3fs_crypt_lib_name(void)
{
static const char version[] = "GnuTLS(gcrypt)";
return version;
}
#endif // USE_GNUTLS_NETTLE
//-------------------------------------------------------------------
// Utility Function for global init
//-------------------------------------------------------------------
bool s3fs_init_global_ssl(void)
{
if(GNUTLS_E_SUCCESS != gnutls_global_init()){
return false;
}
return true;
}
bool s3fs_destroy_global_ssl(void)
{
gnutls_global_deinit();
return true;
}
//-------------------------------------------------------------------
// Utility Function for crypt lock
//-------------------------------------------------------------------
bool s3fs_init_crypt_mutex(void)
{
return true;
}
bool s3fs_destroy_crypt_mutex(void)
{
return true;
}
//-------------------------------------------------------------------
// Utility Function for HMAC
//-------------------------------------------------------------------
#ifdef USE_GNUTLS_NETTLE
bool s3fs_HMAC(const void* key, size_t keylen, const unsigned char* data, size_t datalen, unsigned char** digest, unsigned int* digestlen)
{
if(!key || 0 >= keylen || !data || 0 >= datalen || !digest || !digestlen){
return false;
}
if(NULL == (*digest = (unsigned char*)malloc(SHA1_DIGEST_SIZE))){
return false;
}
struct hmac_sha1_ctx ctx_hmac;
hmac_sha1_set_key(&ctx_hmac, keylen, reinterpret_cast<const uint8_t*>(key));
hmac_sha1_update(&ctx_hmac, datalen, reinterpret_cast<const uint8_t*>(data));
hmac_sha1_digest(&ctx_hmac, SHA1_DIGEST_SIZE, reinterpret_cast<uint8_t*>(*digest));
*digestlen = SHA1_DIGEST_SIZE;
return true;
}
bool s3fs_HMAC256(const void* key, size_t keylen, const unsigned char* data, size_t datalen, unsigned char** digest, unsigned int* digestlen)
{
if(!key || 0 >= keylen || !data || 0 >= datalen || !digest || !digestlen){
return false;
}
if(NULL == (*digest = (unsigned char*)malloc(SHA256_DIGEST_SIZE))){
return false;
}
struct hmac_sha256_ctx ctx_hmac;
hmac_sha256_set_key(&ctx_hmac, keylen, reinterpret_cast<const uint8_t*>(key));
hmac_sha256_update(&ctx_hmac, datalen, reinterpret_cast<const uint8_t*>(data));
hmac_sha256_digest(&ctx_hmac, SHA256_DIGEST_SIZE, reinterpret_cast<uint8_t*>(*digest));
*digestlen = SHA256_DIGEST_SIZE;
return true;
}
#else // USE_GNUTLS_NETTLE
bool s3fs_HMAC(const void* key, size_t keylen, const unsigned char* data, size_t datalen, unsigned char** digest, unsigned int* digestlen)
{
if(!key || 0 >= keylen || !data || 0 >= datalen || !digest || !digestlen){
return false;
}
if(0 >= (*digestlen = gnutls_hmac_get_len(GNUTLS_MAC_SHA1))){
return false;
}
if(NULL == (*digest = (unsigned char*)malloc(*digestlen + 1))){
return false;
}
if(0 > gnutls_hmac_fast(GNUTLS_MAC_SHA1, key, keylen, data, datalen, *digest)){
free(*digest);
*digest = NULL;
return false;
}
return true;
}
bool s3fs_HMAC256(const void* key, size_t keylen, const unsigned char* data, size_t datalen, unsigned char** digest, unsigned int* digestlen)
{
if(!key || 0 >= keylen || !data || 0 >= datalen || !digest || !digestlen){
return false;
}
if(0 >= (*digestlen = gnutls_hmac_get_len(GNUTLS_MAC_SHA256))){
return false;
}
if(NULL == (*digest = (unsigned char*)malloc(*digestlen + 1))){
return false;
}
if(0 > gnutls_hmac_fast(GNUTLS_MAC_SHA256, key, keylen, data, datalen, *digest)){
free(*digest);
*digest = NULL;
return false;
}
return true;
}
#endif // USE_GNUTLS_NETTLE
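
These HMAC-SHA256 primitives are what the new Signature V4 support builds on; the actual signing lives in S3fsCurl::CalcSignature() in curl.cpp (diff suppressed), so the following is only a sketch of the standard AWS V4 signing-key derivation, with invented helper names. Each s3fs_HMAC256() call allocates its digest, so the intermediate keys are freed as the chain progresses.

#include <cstdlib>
#include <string>
#include "s3fs_auth.h"

// One HMAC-SHA256 step; the callee allocates the digest, the caller owns it.
static unsigned char* hmac256_step(const unsigned char* key, unsigned int keylen, const std::string& data, unsigned int* outlen)
{
    unsigned char* digest = NULL;
    if(!s3fs_HMAC256(key, keylen, reinterpret_cast<const unsigned char*>(data.c_str()), data.size(), &digest, outlen)){
        return NULL;
    }
    return digest;
}

// kSigning = HMAC(HMAC(HMAC(HMAC("AWS4" + secret, yyyymmdd), region), "s3"), "aws4_request")
static unsigned char* make_v4_signing_key(const std::string& secret, const std::string& yyyymmdd, const std::string& region, unsigned int* keylen)
{
    std::string seed = "AWS4" + secret;
    unsigned int len = 0;
    unsigned char* key = hmac256_step(reinterpret_cast<const unsigned char*>(seed.c_str()), seed.size(), yyyymmdd, &len);
    const char* steps[] = {region.c_str(), "s3", "aws4_request"};
    for(int cnt = 0; key && cnt < 3; cnt++){
        unsigned int nextlen = 0;
        unsigned char* next = hmac256_step(key, len, steps[cnt], &nextlen);
        free(key);
        key = next;
        len = nextlen;
    }
    *keylen = len;
    return key;   // NULL on failure; otherwise the caller frees it
}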
//-------------------------------------------------------------------
// Utility Function for MD5
//-------------------------------------------------------------------
#define MD5_DIGEST_LENGTH 16
size_t get_md5_digest_length(void)
{
return MD5_DIGEST_LENGTH;
}
#ifdef USE_GNUTLS_NETTLE
unsigned char* s3fs_md5hexsum(int fd, off_t start, ssize_t size)
{
struct md5_ctx ctx_md5;
unsigned char buf[512];
ssize_t bytes;
unsigned char* result;
if(-1 == size){
struct stat st;
if(-1 == fstat(fd, &st)){
return NULL;
}
size = static_cast<ssize_t>(st.st_size);
}
// seek to top of file.
if(-1 == lseek(fd, start, SEEK_SET)){
return NULL;
}
memset(buf, 0, 512);
md5_init(&ctx_md5);
for(ssize_t total = 0; total < size; total += bytes){
bytes = 512 < (size - total) ? 512 : (size - total);
bytes = read(fd, buf, bytes);
if(0 == bytes){
// end of file
break;
}else if(-1 == bytes){
// error
DPRNNN("file read error(%d)", errno);
return NULL;
}
md5_update(&ctx_md5, bytes, buf);
memset(buf, 0, 512);
}
if(NULL == (result = (unsigned char*)malloc(get_md5_digest_length()))){
return NULL;
}
md5_digest(&ctx_md5, get_md5_digest_length(), result);
if(-1 == lseek(fd, start, SEEK_SET)){
free(result);
return NULL;
}
return result;
}
#else // USE_GNUTLS_NETTLE
unsigned char* s3fs_md5hexsum(int fd, off_t start, ssize_t size)
{
gcry_md_hd_t ctx_md5;
gcry_error_t err;
char buf[512];
ssize_t bytes;
unsigned char* result;
if(-1 == size){
struct stat st;
if(-1 == fstat(fd, &st)){
return NULL;
}
size = static_cast<ssize_t>(st.st_size);
}
// seek to top of file.
if(-1 == lseek(fd, start, SEEK_SET)){
return NULL;
}
memset(buf, 0, 512);
if(GPG_ERR_NO_ERROR != (err = gcry_md_open(&ctx_md5, GCRY_MD_MD5, 0))){
DPRNN("MD5 context creation failure: %s/%s", gcry_strsource(err), gcry_strerror(err));
return NULL;
}
for(ssize_t total = 0; total < size; total += bytes){
bytes = 512 < (size - total) ? 512 : (size - total);
bytes = read(fd, buf, bytes);
if(0 == bytes){
// end of file
break;
}else if(-1 == bytes){
// error
DPRNNN("file read error(%d)", errno);
return NULL;
}
gcry_md_write(ctx_md5, buf, bytes);
memset(buf, 0, 512);
}
if(NULL == (result = (unsigned char*)malloc(get_md5_digest_length()))){
return NULL;
}
memcpy(result, gcry_md_read(ctx_md5, 0), get_md5_digest_length());
gcry_md_close(ctx_md5);
if(-1 == lseek(fd, start, SEEK_SET)){
free(result);
return NULL;
}
return result;
}
#endif // USE_GNUTLS_NETTLE
//-------------------------------------------------------------------
// Utility Function for SHA256
//-------------------------------------------------------------------
#define SHA256_DIGEST_LENGTH 32
size_t get_sha256_digest_length(void)
{
return SHA256_DIGEST_LENGTH;
}
#ifdef USE_GNUTLS_NETTLE
bool s3fs_sha256(const unsigned char* data, unsigned int datalen, unsigned char** digest, unsigned int* digestlen)
{
(*digestlen) = static_cast<unsigned int>(get_sha256_digest_length());
if(NULL == ((*digest) = reinterpret_cast<unsigned char*>(malloc(*digestlen)))){
return false;
}
struct sha256_ctx ctx_sha256;
sha256_init(&ctx_sha256);
sha256_update(&ctx_sha256, datalen, data);
sha256_digest(&ctx_sha256, *digestlen, *digest);
return true;
}
unsigned char* s3fs_sha256hexsum(int fd, off_t start, ssize_t size)
{
struct sha256_ctx ctx_sha256;
unsigned char buf[512];
ssize_t bytes;
unsigned char* result;
if(-1 == size){
struct stat st;
if(-1 == fstat(fd, &st)){
return NULL;
}
size = static_cast<ssize_t>(st.st_size);
}
// seek to top of file.
if(-1 == lseek(fd, start, SEEK_SET)){
return NULL;
}
memset(buf, 0, 512);
sha256_init(&ctx_sha256);
for(ssize_t total = 0; total < size; total += bytes){
bytes = 512 < (size - total) ? 512 : (size - total);
bytes = read(fd, buf, bytes);
if(0 == bytes){
// end of file
break;
}else if(-1 == bytes){
// error
DPRNNN("file read error(%d)", errno);
return NULL;
}
sha256_update(&ctx_sha256, bytes, buf);
memset(buf, 0, 512);
}
if(NULL == (result = (unsigned char*)malloc(get_sha256_digest_length()))){
return NULL;
}
sha256_digest(&ctx_sha256, get_sha256_digest_length(), result);
if(-1 == lseek(fd, start, SEEK_SET)){
free(result);
return NULL;
}
return result;
}
#else // USE_GNUTLS_NETTLE
bool s3fs_sha256(const unsigned char* data, unsigned int datalen, unsigned char** digest, unsigned int* digestlen)
{
(*digestlen) = static_cast<unsigned int>(get_sha256_digest_length());
if(NULL == ((*digest) = reinterpret_cast<unsigned char*>(malloc(*digestlen)))){
return false;
}
gcry_md_hd_t ctx_sha256;
gcry_error_t err;
if(GPG_ERR_NO_ERROR != (err = gcry_md_open(&ctx_sha256, GCRY_MD_SHA256, 0))){
DPRNN("SHA256 context creation failure: %s/%s", gcry_strsource(err), gcry_strerror(err));
free(*digest);
return false;
}
gcry_md_write(ctx_sha256, data, datalen);
memcpy(*digest, gcry_md_read(ctx_sha256, 0), *digestlen);
gcry_md_close(ctx_sha256);
return true;
}
unsigned char* s3fs_sha256hexsum(int fd, off_t start, ssize_t size)
{
gcry_md_hd_t ctx_sha256;
gcry_error_t err;
char buf[512];
ssize_t bytes;
unsigned char* result;
if(-1 == size){
struct stat st;
if(-1 == fstat(fd, &st)){
return NULL;
}
size = static_cast<ssize_t>(st.st_size);
}
// seek to top of file.
if(-1 == lseek(fd, start, SEEK_SET)){
return NULL;
}
memset(buf, 0, 512);
if(GPG_ERR_NO_ERROR != (err = gcry_md_open(&ctx_sha256, GCRY_MD_SHA256, 0))){
DPRNN("SHA256 context creation failure: %s/%s", gcry_strsource(err), gcry_strerror(err));
return NULL;
}
for(ssize_t total = 0; total < size; total += bytes){
bytes = 512 < (size - total) ? 512 : (size - total);
bytes = read(fd, buf, bytes);
if(0 == bytes){
// end of file
break;
}else if(-1 == bytes){
// error
DPRNNN("file read error(%d)", errno);
return NULL;
}
gcry_md_write(ctx_sha256, buf, bytes);
memset(buf, 0, 512);
}
if(NULL == (result = (unsigned char*)malloc(get_sha256_digest_length()))){
return NULL;
}
memcpy(result, gcry_md_read(ctx_sha256, 0), get_sha256_digest_length());
gcry_md_close(ctx_sha256);
if(-1 == lseek(fd, start, SEEK_SET)){
free(result);
return NULL;
}
return result;
}
#endif // USE_GNUTLS_NETTLE
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: noet sw=4 ts=4 fdm=marker
* vim<600: noet sw=4 ts=4
*/

src/nss_auth.cpp (new file, 293 lines)

@ -0,0 +1,293 @@
/*
* s3fs - FUSE-based file system backed by Amazon S3
*
* Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <pthread.h>
#include <unistd.h>
#include <syslog.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <string.h>
#include <nss.h>
#include <pk11pub.h>
#include <hasht.h>
#include <prinit.h>
#include <string>
#include <map>
#include "common.h"
#include "s3fs_auth.h"
using namespace std;
//-------------------------------------------------------------------
// Utility Function for version
//-------------------------------------------------------------------
const char* s3fs_crypt_lib_name(void)
{
static const char version[] = "NSS";
return version;
}
//-------------------------------------------------------------------
// Utility Function for global init
//-------------------------------------------------------------------
bool s3fs_init_global_ssl(void)
{
NSS_Init(NULL);
NSS_NoDB_Init(NULL);
return true;
}
bool s3fs_destroy_global_ssl(void)
{
NSS_Shutdown();
PL_ArenaFinish();
PR_Cleanup();
return true;
}
//-------------------------------------------------------------------
// Utility Function for crypt lock
//-------------------------------------------------------------------
bool s3fs_init_crypt_mutex(void)
{
return true;
}
bool s3fs_destroy_crypt_mutex(void)
{
return true;
}
//-------------------------------------------------------------------
// Utility Function for HMAC
//-------------------------------------------------------------------
static bool s3fs_HMAC_RAW(const void* key, size_t keylen, const unsigned char* data, size_t datalen, unsigned char** digest, unsigned int* digestlen, bool is_sha256)
{
if(!key || 0 >= keylen || !data || 0 >= datalen || !digest || !digestlen){
return false;
}
PK11SlotInfo* Slot;
PK11SymKey* pKey;
PK11Context* Context;
SECStatus SecStatus;
unsigned char tmpdigest[64];
SECItem KeySecItem = {siBuffer, reinterpret_cast<unsigned char*>(const_cast<void*>(key)), static_cast<unsigned int>(keylen)};
SECItem NullSecItem = {siBuffer, NULL, 0};
if(NULL == (Slot = PK11_GetInternalKeySlot())){
return false;
}
if(NULL == (pKey = PK11_ImportSymKey(Slot, (is_sha256 ? CKM_SHA256_HMAC : CKM_SHA_1_HMAC), PK11_OriginUnwrap, CKA_SIGN, &KeySecItem, NULL))){
PK11_FreeSlot(Slot);
return false;
}
if(NULL == (Context = PK11_CreateContextBySymKey((is_sha256 ? CKM_SHA256_HMAC : CKM_SHA_1_HMAC), CKA_SIGN, pKey, &NullSecItem))){
PK11_FreeSymKey(pKey);
PK11_FreeSlot(Slot);
return false;
}
*digestlen = 0;
if(SECSuccess != (SecStatus = PK11_DigestBegin(Context)) ||
SECSuccess != (SecStatus = PK11_DigestOp(Context, data, datalen)) ||
SECSuccess != (SecStatus = PK11_DigestFinal(Context, tmpdigest, digestlen, sizeof(tmpdigest))) )
{
PK11_DestroyContext(Context, PR_TRUE);
PK11_FreeSymKey(pKey);
PK11_FreeSlot(Slot);
return false;
}
PK11_DestroyContext(Context, PR_TRUE);
PK11_FreeSymKey(pKey);
PK11_FreeSlot(Slot);
if(NULL == (*digest = (unsigned char*)malloc(*digestlen))){
return false;
}
memcpy(*digest, tmpdigest, *digestlen);
return true;
}
bool s3fs_HMAC(const void* key, size_t keylen, const unsigned char* data, size_t datalen, unsigned char** digest, unsigned int* digestlen)
{
return s3fs_HMAC_RAW(key, keylen, data, datalen, digest, digestlen, false);
}
bool s3fs_HMAC256(const void* key, size_t keylen, const unsigned char* data, size_t datalen, unsigned char** digest, unsigned int* digestlen)
{
return s3fs_HMAC_RAW(key, keylen, data, datalen, digest, digestlen, true);
}
//-------------------------------------------------------------------
// Utility Function for MD5
//-------------------------------------------------------------------
size_t get_md5_digest_length(void)
{
return MD5_LENGTH;
}
unsigned char* s3fs_md5hexsum(int fd, off_t start, ssize_t size)
{
PK11Context* md5ctx;
unsigned char buf[512];
ssize_t bytes;
unsigned char* result;
unsigned int md5outlen;
if(-1 == size){
struct stat st;
if(-1 == fstat(fd, &st)){
return NULL;
}
size = static_cast<ssize_t>(st.st_size);
}
// seek to top of file.
if(-1 == lseek(fd, start, SEEK_SET)){
return NULL;
}
memset(buf, 0, 512);
md5ctx = PK11_CreateDigestContext(SEC_OID_MD5);
for(ssize_t total = 0; total < size; total += bytes){
bytes = 512 < (size - total) ? 512 : (size - total);
bytes = read(fd, buf, bytes);
if(0 == bytes){
// end of file
break;
}else if(-1 == bytes){
// error
DPRNNN("file read error(%d)", errno);
return NULL;
}
PK11_DigestOp(md5ctx, buf, bytes);
memset(buf, 0, 512);
}
if(NULL == (result = (unsigned char*)malloc(get_md5_digest_length()))){
PK11_DestroyContext(md5ctx, PR_TRUE);
return NULL;
}
PK11_DigestFinal(md5ctx, result, &md5outlen, get_md5_digest_length());
PK11_DestroyContext(md5ctx, PR_TRUE);
if(-1 == lseek(fd, start, SEEK_SET)){
free(result);
return NULL;
}
return result;
}
//-------------------------------------------------------------------
// Utility Function for SHA256
//-------------------------------------------------------------------
size_t get_sha256_digest_length(void)
{
return SHA256_LENGTH;
}
bool s3fs_sha256(const unsigned char* data, unsigned int datalen, unsigned char** digest, unsigned int* digestlen)
{
(*digestlen) = static_cast<unsigned int>(get_sha256_digest_length());
if(NULL == ((*digest) = reinterpret_cast<unsigned char*>(malloc(*digestlen)))){
return false;
}
PK11Context* sha256ctx;
unsigned int sha256outlen;
sha256ctx = PK11_CreateDigestContext(SEC_OID_SHA256);
PK11_DigestOp(sha256ctx, data, datalen);
PK11_DigestFinal(sha256ctx, *digest, &sha256outlen, *digestlen);
PK11_DestroyContext(sha256ctx, PR_TRUE);
*digestlen = sha256outlen;
return true;
}
unsigned char* s3fs_sha256hexsum(int fd, off_t start, ssize_t size)
{
PK11Context* sha256ctx;
unsigned char buf[512];
ssize_t bytes;
unsigned char* result;
unsigned int sha256outlen;
if(-1 == size){
struct stat st;
if(-1 == fstat(fd, &st)){
return NULL;
}
size = static_cast<ssize_t>(st.st_size);
}
// seek to top of file.
if(-1 == lseek(fd, start, SEEK_SET)){
return NULL;
}
memset(buf, 0, 512);
sha256ctx = PK11_CreateDigestContext(SEC_OID_SHA256);
for(ssize_t total = 0; total < size; total += bytes){
bytes = 512 < (size - total) ? 512 : (size - total);
bytes = read(fd, buf, bytes);
if(0 == bytes){
// end of file
break;
}else if(-1 == bytes){
// error
DPRNNN("file read error(%d)", errno);
return NULL;
}
PK11_DigestOp(sha256ctx, buf, bytes);
memset(buf, 0, 512);
}
if(NULL == (result = (unsigned char*)malloc(get_sha256_digest_length()))){
PK11_DestroyContext(sha256ctx, PR_TRUE);
return NULL;
}
PK11_DigestFinal(sha256ctx, result, &sha256outlen, get_sha256_digest_length());
PK11_DestroyContext(sha256ctx, PR_TRUE);
if(-1 == lseek(fd, start, SEEK_SET)){
free(result);
return NULL;
}
return result;
}
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: noet sw=4 ts=4 fdm=marker
* vim<600: noet sw=4 ts=4
*/

src/openssl_auth.cpp (new file, 356 lines)

@ -0,0 +1,356 @@
/*
* s3fs - FUSE-based file system backed by Amazon S3
*
* Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <pthread.h>
#include <unistd.h>
#include <syslog.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <string.h>
#include <openssl/bio.h>
#include <openssl/buffer.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/md5.h>
#include <openssl/sha.h>
#include <openssl/crypto.h>
#include <openssl/err.h>
#include <string>
#include <map>
#include "common.h"
#include "s3fs_auth.h"
using namespace std;
//-------------------------------------------------------------------
// Utility Function for version
//-------------------------------------------------------------------
const char* s3fs_crypt_lib_name(void)
{
static const char version[] = "OpenSSL";
return version;
}
//-------------------------------------------------------------------
// Utility Function for global init
//-------------------------------------------------------------------
bool s3fs_init_global_ssl(void)
{
ERR_load_crypto_strings();
ERR_load_BIO_strings();
OpenSSL_add_all_algorithms();
return true;
}
bool s3fs_destroy_global_ssl(void)
{
EVP_cleanup();
ERR_free_strings();
return true;
}
//-------------------------------------------------------------------
// Utility Function for crypt lock
//-------------------------------------------------------------------
// internal use struct for openssl
struct CRYPTO_dynlock_value
{
pthread_mutex_t dyn_mutex;
};
static pthread_mutex_t* s3fs_crypt_mutex = NULL;
static void s3fs_crypt_mutex_lock(int mode, int pos, const char* file, int line)
{
if(s3fs_crypt_mutex){
if(mode & CRYPTO_LOCK){
pthread_mutex_lock(&s3fs_crypt_mutex[pos]);
}else{
pthread_mutex_unlock(&s3fs_crypt_mutex[pos]);
}
}
}
static unsigned long s3fs_crypt_get_threadid(void)
{
// On some systems (FreeBSD etc.) pthread_t is a pointer to a structure,
// so a C-style cast is used here instead of per-platform #ifdefs.
return (unsigned long)(pthread_self());
}
static struct CRYPTO_dynlock_value* s3fs_dyn_crypt_mutex(const char* file, int line)
{
struct CRYPTO_dynlock_value* dyndata;
if(NULL == (dyndata = static_cast<struct CRYPTO_dynlock_value*>(malloc(sizeof(struct CRYPTO_dynlock_value))))){
DPRNCRIT("Could not allocate memory for CRYPTO_dynlock_value");
return NULL;
}
pthread_mutex_init(&(dyndata->dyn_mutex), NULL);
return dyndata;
}
static void s3fs_dyn_crypt_mutex_lock(int mode, struct CRYPTO_dynlock_value* dyndata, const char* file, int line)
{
if(dyndata){
if(mode & CRYPTO_LOCK){
pthread_mutex_lock(&(dyndata->dyn_mutex));
}else{
pthread_mutex_unlock(&(dyndata->dyn_mutex));
}
}
}
static void s3fs_destroy_dyn_crypt_mutex(struct CRYPTO_dynlock_value* dyndata, const char* file, int line)
{
if(dyndata){
pthread_mutex_destroy(&(dyndata->dyn_mutex));
free(dyndata);
}
}
bool s3fs_init_crypt_mutex(void)
{
if(s3fs_crypt_mutex){
FPRNNN("s3fs_crypt_mutex is not NULL, destroy it.");
if(!s3fs_destroy_crypt_mutex()){
DPRN("Failed to s3fs_crypt_mutex");
return false;
}
}
if(NULL == (s3fs_crypt_mutex = static_cast<pthread_mutex_t*>(malloc(CRYPTO_num_locks() * sizeof(pthread_mutex_t))))){
DPRNCRIT("Could not allocate memory for s3fs_crypt_mutex");
return false;
}
for(int cnt = 0; cnt < CRYPTO_num_locks(); cnt++){
pthread_mutex_init(&s3fs_crypt_mutex[cnt], NULL);
}
// static lock
CRYPTO_set_locking_callback(s3fs_crypt_mutex_lock);
CRYPTO_set_id_callback(s3fs_crypt_get_threadid);
// dynamic lock
CRYPTO_set_dynlock_create_callback(s3fs_dyn_crypt_mutex);
CRYPTO_set_dynlock_lock_callback(s3fs_dyn_crypt_mutex_lock);
CRYPTO_set_dynlock_destroy_callback(s3fs_destroy_dyn_crypt_mutex);
return true;
}
bool s3fs_destroy_crypt_mutex(void)
{
if(!s3fs_crypt_mutex){
return true;
}
CRYPTO_set_dynlock_destroy_callback(NULL);
CRYPTO_set_dynlock_lock_callback(NULL);
CRYPTO_set_dynlock_create_callback(NULL);
CRYPTO_set_id_callback(NULL);
CRYPTO_set_locking_callback(NULL);
for(int cnt = 0; cnt < CRYPTO_num_locks(); cnt++){
pthread_mutex_destroy(&s3fs_crypt_mutex[cnt]);
}
CRYPTO_cleanup_all_ex_data();
free(s3fs_crypt_mutex);
s3fs_crypt_mutex = NULL;
return true;
}
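
The static and dynamic locking callbacks above replace the CryptMutex members that this change removes from S3fsCurl (see the curl.h diff earlier), moving thread-safety setup into the crypto backend. Below is a hedged sketch of the expected startup/teardown order; the wrapper names are illustrative, and the real call sites are in the suppressed curl.cpp/s3fs.cpp diffs.

#include "s3fs_auth.h"

// Bring the crypto layer up before any libcurl/OpenSSL traffic starts.
static bool init_auth_layer(void)
{
    if(!s3fs_init_global_ssl()){          // ERR_load_crypto_strings(), OpenSSL_add_all_algorithms(), ...
        return false;
    }
    if(!s3fs_init_crypt_mutex()){         // register the locking callbacks defined above
        s3fs_destroy_global_ssl();
        return false;
    }
    return true;
}

// Mirror image at shutdown.
static void destroy_auth_layer(void)
{
    s3fs_destroy_crypt_mutex();           // unregister callbacks, free the mutex array
    s3fs_destroy_global_ssl();            // EVP_cleanup() / ERR_free_strings()
}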
//-------------------------------------------------------------------
// Utility Function for HMAC
//-------------------------------------------------------------------
static bool s3fs_HMAC_RAW(const void* key, size_t keylen, const unsigned char* data, size_t datalen, unsigned char** digest, unsigned int* digestlen, bool is_sha256)
{
if(!key || 0 >= keylen || !data || 0 >= datalen || !digest || !digestlen){
return false;
}
(*digestlen) = EVP_MAX_MD_SIZE * sizeof(unsigned char);
if(NULL == ((*digest) = (unsigned char*)malloc(*digestlen))){
return false;
}
if(is_sha256){
HMAC(EVP_sha256(), key, keylen, data, datalen, *digest, digestlen);
}else{
HMAC(EVP_sha1(), key, keylen, data, datalen, *digest, digestlen);
}
return true;
}
bool s3fs_HMAC(const void* key, size_t keylen, const unsigned char* data, size_t datalen, unsigned char** digest, unsigned int* digestlen)
{
return s3fs_HMAC_RAW(key, keylen, data, datalen, digest, digestlen, false);
}
bool s3fs_HMAC256(const void* key, size_t keylen, const unsigned char* data, size_t datalen, unsigned char** digest, unsigned int* digestlen)
{
return s3fs_HMAC_RAW(key, keylen, data, datalen, digest, digestlen, true);
}
//-------------------------------------------------------------------
// Utility Function for MD5
//-------------------------------------------------------------------
size_t get_md5_digest_length(void)
{
return MD5_DIGEST_LENGTH;
}
unsigned char* s3fs_md5hexsum(int fd, off_t start, ssize_t size)
{
MD5_CTX md5ctx;
char buf[512];
ssize_t bytes;
unsigned char* result;
if(-1 == size){
struct stat st;
if(-1 == fstat(fd, &st)){
return NULL;
}
size = static_cast<ssize_t>(st.st_size);
}
// seek to top of file.
if(-1 == lseek(fd, start, SEEK_SET)){
return NULL;
}
memset(buf, 0, 512);
MD5_Init(&md5ctx);
for(ssize_t total = 0; total < size; total += bytes){
bytes = 512 < (size - total) ? 512 : (size - total);
bytes = read(fd, buf, bytes);
if(0 == bytes){
// end of file
break;
}else if(-1 == bytes){
// error
DPRNNN("file read error(%d)", errno);
return NULL;
}
MD5_Update(&md5ctx, buf, bytes);
memset(buf, 0, 512);
}
if(NULL == (result = (unsigned char*)malloc(get_md5_digest_length()))){
return NULL;
}
MD5_Final(result, &md5ctx);
if(-1 == lseek(fd, start, SEEK_SET)){
free(result);
return NULL;
}
return result;
}
//-------------------------------------------------------------------
// Utility Function for SHA256
//-------------------------------------------------------------------
size_t get_sha256_digest_length(void)
{
return SHA256_DIGEST_LENGTH;
}
bool s3fs_sha256(const unsigned char* data, unsigned int datalen, unsigned char** digest, unsigned int* digestlen)
{
(*digestlen) = EVP_MAX_MD_SIZE * sizeof(unsigned char);
if(NULL == ((*digest) = reinterpret_cast<unsigned char*>(malloc(*digestlen)))){
return false;
}
const EVP_MD* md = EVP_get_digestbyname("sha256");
EVP_MD_CTX* mdctx = EVP_MD_CTX_create();
EVP_DigestInit_ex(mdctx, md, NULL);
EVP_DigestUpdate(mdctx, data, datalen);
EVP_DigestFinal_ex(mdctx, *digest, digestlen);
EVP_MD_CTX_destroy(mdctx);
return true;
}
unsigned char* s3fs_sha256hexsum(int fd, off_t start, ssize_t size)
{
const EVP_MD* md = EVP_get_digestbyname("sha256");
EVP_MD_CTX* sha256ctx = EVP_MD_CTX_create();
EVP_DigestInit_ex(sha256ctx, md, NULL);
char buf[512];
ssize_t bytes;
unsigned char* result;
if(-1 == size){
struct stat st;
if(-1 == fstat(fd, &st)){
return NULL;
}
size = static_cast<ssize_t>(st.st_size);
}
// seek to top of file.
if(-1 == lseek(fd, start, SEEK_SET)){
return NULL;
}
memset(buf, 0, 512);
for(ssize_t total = 0; total < size; total += bytes){
bytes = 512 < (size - total) ? 512 : (size - total);
bytes = read(fd, buf, bytes);
if(0 == bytes){
// end of file
break;
}else if(-1 == bytes){
// error
DPRNNN("file read error(%d)", errno);
return NULL;
}
EVP_DigestUpdate(sha256ctx, buf, bytes);
memset(buf, 0, 512);
}
if(NULL == (result = (unsigned char*)malloc(get_sha256_digest_length()))){
return NULL;
}
EVP_DigestFinal_ex(sha256ctx, result, NULL);
EVP_MD_CTX_destroy(sha256ctx);
if(-1 == lseek(fd, start, SEEK_SET)){
free(result);
return NULL;
}
return result;
}
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: noet sw=4 ts=4 fdm=marker
* vim<600: noet sw=4 ts=4
*/

File diff suppressed because it is too large

src/s3fs.h

@ -1,3 +1,22 @@
/*
* s3fs - FUSE-based file system backed by Amazon S3
*
* Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef S3FS_S3_H_
#define S3FS_S3_H_
@ -65,36 +84,15 @@
#endif // HAVE_MALLOC_TRIM
//
// For initializing libcurl with NSS
// Normally libcurl initializes the NSS library, but usually allows
// you to initialize s3fs forcibly. Because Memory leak is reported
// in valgrind(about curl_global_init() function), and this is for
// the cancellation. When "--enable-nss-init" option is specified
// at configurarion, it makes NSS_INIT_ENABLED flag into Makefile.
// NOTICE
// This defines and macros is temporary, and this should be deleted.
//
#ifdef NSS_INIT_ENABLED
#include <nss.h>
#include <prinit.h>
#define S3FS_INIT_NSS() \
{ \
NSS_NoDB_Init(NULL); \
}
#define S3FS_CLEANUP_NSS() \
{ \
NSS_Shutdown(); \
PL_ArenaFinish(); \
PR_Cleanup(); \
}
#else // NSS_INIT_ENABLED
#define S3FS_INIT_NSS()
#define S3FS_CLEANUP_NSS()
#endif // NSS_INIT_ENABLED
char* get_object_sseckey_md5(const char* path);
#endif // S3FS_S3_H_
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: noet sw=4 ts=4 fdm=marker
* vim<600: noet sw=4 ts=4
*/

src/s3fs_auth.h (new file, 60 lines)

@ -0,0 +1,60 @@
/*
* s3fs - FUSE-based file system backed by Amazon S3
*
* Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef S3FS_AUTH_H_
#define S3FS_AUTH_H_
//-------------------------------------------------------------------
// Utility functions for Authentication
//-------------------------------------------------------------------
//
// in common_auth.cpp
//
char* s3fs_base64(const unsigned char* input, size_t length);
unsigned char* s3fs_decode64(const char* input, size_t* plength);
std::string s3fs_get_content_md5(int fd);
std::string s3fs_md5sum(int fd, off_t start, ssize_t size);
std::string s3fs_sha256sum(int fd, off_t start, ssize_t size);
//
// in xxxxxx_auth.cpp
//
const char* s3fs_crypt_lib_name(void);
bool s3fs_init_global_ssl(void);
bool s3fs_destroy_global_ssl(void);
bool s3fs_init_crypt_mutex(void);
bool s3fs_destroy_crypt_mutex(void);
bool s3fs_HMAC(const void* key, size_t keylen, const unsigned char* data, size_t datalen, unsigned char** digest, unsigned int* digestlen);
bool s3fs_HMAC256(const void* key, size_t keylen, const unsigned char* data, size_t datalen, unsigned char** digest, unsigned int* digestlen);
size_t get_md5_digest_length(void);
unsigned char* s3fs_md5hexsum(int fd, off_t start, ssize_t size);
bool s3fs_sha256(const unsigned char* data, unsigned int datalen, unsigned char** digest, unsigned int* digestlen);
size_t get_sha256_digest_length(void);
unsigned char* s3fs_sha256hexsum(int fd, off_t start, ssize_t size);
#endif // S3FS_AUTH_H_
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: noet sw=4 ts=4 fdm=marker
* vim<600: noet sw=4 ts=4
*/

src/s3fs_util.cpp

@ -41,6 +41,7 @@
#include "s3fs_util.h"
#include "string_util.h"
#include "s3fs.h"
#include "s3fs_auth.h"
using namespace std;
@ -228,6 +229,26 @@ bool S3ObjList::IsDir(const char* name) const
return ps3obj->is_dir;
}
bool S3ObjList::GetLastName(std::string& lastname) const
{
bool result = false;
lastname = "";
for(s3obj_t::const_iterator iter = objects.begin(); iter != objects.end(); iter++){
if((*iter).second.orgname.length()){
if(0 > strcmp(lastname.c_str(), (*iter).second.orgname.c_str())){
lastname = (*iter).second.orgname;
result = true;
}
}else{
if(0 > strcmp(lastname.c_str(), (*iter).second.normalname.c_str())){
lastname = (*iter).second.normalname;
result = true;
}
}
}
return result;
}
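
GetLastName() appears to exist so that a truncated ListBucketResult without a NextMarker can still be continued from the lexicographically last key seen so far; the real loop is in s3fs.cpp, whose diff is suppressed, so the following is only a sketch with an invented helper name (the marker value should be URL-encoded in practice).

#include <string>
#include "s3fs_util.h"   // S3ObjList

// Build the query string for the next ListBucket page from what has been
// collected so far; falls back to the base query on the first page.
static std::string next_list_query(const S3ObjList& head, const std::string& base_query)
{
    std::string lastname;
    if(!head.GetLastName(lastname)){
        return base_query;
    }
    return base_query + "&marker=" + lastname;
}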
bool S3ObjList::GetNameList(s3obj_list_t& list, bool OnlyNormalized, bool CutSlash) const
{
s3obj_t::const_iterator iter;
@ -311,7 +332,7 @@ MVNODE *create_mvnode(const char *old_path, const char *new_path, bool is_dir, b
return NULL;
}
if(NULL == (p_new_path = strdup(new_path))){
if(NULL == (p_new_path = strdup(new_path))){
free(p);
free(p_old_path);
printf("create_mvnode: could not allocation memory for p_new_path\n");
@ -329,7 +350,7 @@ MVNODE *create_mvnode(const char *old_path, const char *new_path, bool is_dir, b
}
//
// Add sorted MVNODE data(Ascending order)
// Add sorted MVNODE data(Ascending order)
//
MVNODE *add_mvnode(MVNODE** head, MVNODE** tail, const char *old_path, const char *new_path, bool is_dir, bool normdir)
{
@ -465,19 +486,21 @@ string get_username(uid_t uid)
// make buffer
if(0 == maxlen){
if(0 > (maxlen = (size_t)sysconf(_SC_GETPW_R_SIZE_MAX))){
long res = sysconf(_SC_GETPW_R_SIZE_MAX);
if(0 > res){
DPRNNN("could not get max pw length.");
maxlen = 0;
return string("");
}
maxlen = res;
}
if(NULL == (pbuf = (char*)malloc(sizeof(char) * maxlen))){
DPRNCRIT("failed to allocate memory.");
return string("");
}
// get group infomation
// get group information
if(0 != (result = getpwuid_r(uid, &pwinfo, pbuf, maxlen, &ppwinfo))){
DPRNNN("could not get pw infomation.");
DPRNNN("could not get pw information.");
free(pbuf);
return string("");
}
@ -501,19 +524,21 @@ int is_uid_inculde_group(uid_t uid, gid_t gid)
// make buffer
if(0 == maxlen){
if(0 > (maxlen = (size_t)sysconf(_SC_GETGR_R_SIZE_MAX))){
long res = sysconf(_SC_GETGR_R_SIZE_MAX);
if(0 > res){
DPRNNN("could not get max name length.");
maxlen = 0;
return -ERANGE;
}
maxlen = res;
}
if(NULL == (pbuf = (char*)malloc(sizeof(char) * maxlen))){
DPRNCRIT("failed to allocate memory.");
return -ENOMEM;
}
// get group infomation
// get group information
if(0 != (result = getgrgid_r(gid, &ginfo, pbuf, maxlen, &pginfo))){
DPRNNN("could not get group infomation.");
DPRNNN("could not get group information.");
free(pbuf);
return -result;
}
@ -670,9 +695,9 @@ mode_t get_mode(headers_t& meta, const char* path, bool checkdir, bool forcedir)
}
}
// Checking the bitmask, if the last 3 bits are all zero then process as a regular
// file type (S_IFDIR or S_IFREG), otherwise return mode unmodified so that S_IFIFO,
// file type (S_IFDIR or S_IFREG), otherwise return mode unmodified so that S_IFIFO,
// S_IFSOCK, S_IFCHR, S_IFLNK and S_IFBLK devices can be processed properly by fuse.
if(!(mode & S_IFMT)){
if(!(mode & S_IFMT)){
if(!isS3sync){
if(checkdir){
if(forcedir){
@ -822,7 +847,7 @@ void show_usage (void)
void show_help (void)
{
show_usage();
printf(
printf(
"\n"
"Mount an Amazon S3 bucket as a file system.\n"
"\n"
@ -854,7 +879,24 @@ void show_help (void)
" - this option makes Amazon's Reduced Redundancy Storage enable.\n"
"\n"
" use_sse (default is disable)\n"
" - this option makes Amazon's Server Site Encryption enable.\n"
" - use Amazon's Server-Side Encryption or Server-Side Encryption\n"
" with Customer-Provided Encryption Keys (SSE-C).\n"
" This option cannot be specified together with use_rrs. Specifying\n"
" only \"use_sse\" or \"use_sse=1\" enables Server-Side Encryption\n"
" (use_sse=1 is kept for compatibility with old versions).\n"
" Specifying this option with the path of a file that holds SSE-C\n"
" secret keys enables Server-Side Encryption with Customer-Provided\n"
" Encryption Keys (use_sse=file).\n"
" The file must have 600 permissions and may contain several lines,\n"
" one SSE-C key per line. The first line in the file is used as the\n"
" Customer-Provided Encryption Key for uploading and for changing\n"
" headers, etc.\n"
" Any keys after the first line are used for downloading objects\n"
" that were encrypted with a key other than the first one, so the\n"
" file can hold the complete SSE-C key history.\n"
" If the AWSSSECKEYS environment variable is set, it can supply the\n"
" SSE-C keys instead of this option.\n"
"\n"
" public_bucket (default=\"\" which means disabled)\n"
" - anonymously mount a public bucket when set to 1\n"
@ -905,6 +947,9 @@ void show_help (void)
" You can specify this option for performance, s3fs memorizes \n"
" in stat cache that the object(file or directory) does not exist.\n"
"\n"
" no_check_certificate\n"
" - server certificate won't be checked against the available certificate authorities.\n"
"\n"
" nodnscache (disable dns cache)\n"
" - s3fs is always using dns cache, this option make dns cache disable.\n"
"\n"
@ -923,8 +968,11 @@ void show_help (void)
" at once. It is necessary to set this value depending on a CPU \n"
" and a network band.\n"
"\n"
" multipart_size (default=\"10\")\n"
" - part size, in MB, for each multipart request.\n"
"\n"
" fd_page_size (default=\"52428800\"(50MB))\n"
" - number of internal management page size for each file discriptor.\n"
" - number of internal management page size for each file descriptor.\n"
" For delayed reading and writing by s3fs, s3fs manages pages which \n"
" is separated from object. Each pages has a status that data is \n"
" already loaded(or not loaded yet).\n"
@ -934,6 +982,27 @@ void show_help (void)
" url (default=\"http://s3.amazonaws.com\")\n"
" - sets the url to use to access amazon s3\n"
"\n"
" endpoint (default=\"us-east-1\")\n"
" - sets the endpoint to use on signature version 4\n"
" If this option is not specified, s3fs uses \"us-east-1\" region as\n"
" the default. If the s3fs could not connect to the region specified\n"
" by this option, s3fs could not run. But if you do not specify this\n"
" option, and if you can not connect with the default region, s3fs\n"
" will retry to automatically connect to the other region. So s3fs\n"
" can know the correct region name, because s3fs can find it in an\n"
" error from the S3 server.\n"
"\n"
" sigv2 (default is signature version 4)\n"
" - sets signing AWS requests by sing Signature Version 2\n"
"\n"
" mp_umask (default is \"0000\")\n"
" - sets umask for the mount point directory.\n"
" If allow_other option is not set, s3fs allows access to the mount\n"
" point only to the owner. In the opposite case s3fs allows access\n"
" to all users as the default. But if you set the allow_other with\n"
" this option, you can control the permissions of the\n"
" mount point by this option like umask.\n"
"\n"
" nomultipart (disable multipart uploads)\n"
"\n"
" enable_content_md5 (default is disable)\n"
@ -943,8 +1012,8 @@ void show_help (void)
" - set the IAM Role that will supply the credentials from the \n"
" instance meta-data.\n"
"\n"
" noxmlns (disable registing xml name space)\n"
" disable registing xml name space for response of \n"
" noxmlns (disable registering xml name space)\n"
" disable registering xml name space for response of \n"
" ListBucketResult and ListVersionsResult etc. Default name \n"
" space is looked up from \"http://s3.amazonaws.com/doc/2006-03-01\".\n"
" This option should not be specified now, because s3fs looks up\n"
@ -964,7 +1033,12 @@ void show_help (void)
" option does not use copy-api for all command(ex. chmod, chown,\n"
" touch, mv, etc), but this option does not use copy-api for\n"
" only rename command(ex. mv). If this option is specified with\n"
" nocopapi, the s3fs ignores it.\n"
" nocopyapi, then s3fs ignores it.\n"
"\n"
" use_path_request_style (use legacy API calling style)\n"
" Enble compatibility with S3-like APIs which do not support\n"
" the virtual-host request style, by using the older path request\n"
" style.\n"
"\n"
"FUSE/mount Options:\n"
"\n"
@ -997,11 +1071,20 @@ void show_help (void)
void show_version(void)
{
printf(
"Amazon Simple Storage Service File System %s\n"
"Amazon Simple Storage Service File System V%s with %s\n"
"Copyright (C) 2010 Randy Rizun <rrizun@gmail.com>\n"
"License GPL2: GNU GPL version 2 <http://gnu.org/licenses/gpl.html>\n"
"This is free software: you are free to change and redistribute it.\n"
"There is NO WARRANTY, to the extent permitted by law.\n", VERSION );
"There is NO WARRANTY, to the extent permitted by law.\n",
VERSION, s3fs_crypt_lib_name());
return;
}
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: noet sw=4 ts=4 fdm=marker
* vim<600: noet sw=4 ts=4
*/
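The use_sse description above is easier to follow with a short walk-through. The sketch below is illustrative only: the bucket name, mount point, key file path and key strings are placeholders, and passing the key file path directly as the option value follows the "(use_sse=file)" note in the help text rather than a verified invocation.
# SSE-C key file: one key per line; the first line is the key used for
# uploads, later lines are older keys kept for decrypting existing objects.
# The help text above requires the file to have 600 permissions.
cat > "${HOME}/.s3fs-sse-keys" <<'EOF'
PLACEHOLDER-CURRENT-SSE-C-KEY
PLACEHOLDER-PREVIOUS-SSE-C-KEY
EOF
chmod 600 "${HOME}/.s3fs-sse-keys"
# plain Server-Side Encryption: specify only "use_sse"
s3fs mybucket /mnt/mybucket -o passwd_file="${HOME}/.passwd-s3fs" -o use_sse
# SSE-C: pass the key file as the option value (hypothetical path)
s3fs mybucket /mnt/mybucket -o passwd_file="${HOME}/.passwd-s3fs" \
    -o use_sse="${HOME}/.s3fs-sse-keys"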
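Similarly, the new connection and permission options documented in this help text (endpoint, sigv2, multipart_size, mp_umask, use_path_request_style, no_check_certificate) might be combined as in the following sketch. The bucket name, mount point, region and URL are placeholders; only the option spellings come from the help text and the integration test further below.
# mount against a specific region with the default Signature V4 signer
s3fs mybucket /mnt/mybucket -o passwd_file="${HOME}/.passwd-s3fs" \
    -o endpoint=eu-west-1 -o multipart_size=20
# mount an S3-compatible service that only supports path-style requests
# and V2 signatures, e.g. a private endpoint with a self-signed certificate
s3fs mybucket /mnt/mybucket -o passwd_file="${HOME}/.passwd-s3fs" \
    -o url=https://s3.example.internal -o use_path_request_style \
    -o sigv2 -o no_check_certificate
# share the mount with other users, but mask group/other write bits
s3fs mybucket /mnt/mybucket -o passwd_file="${HOME}/.passwd-s3fs" \
    -o allow_other -o mp_umask=022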

View File

@ -1,3 +1,22 @@
/*
* s3fs - FUSE-based file system backed by Amazon S3
*
* Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef S3FS_S3FS_UTIL_H_
#define S3FS_S3FS_UTIL_H_
@ -51,6 +70,7 @@ class S3ObjList
std::string GetETag(const char* name) const;
bool IsDir(const char* name) const;
bool GetNameList(s3obj_list_t& list, bool OnlyNormalized = true, bool CutSlash = true) const;
bool GetLastName(std::string& lastname) const;
static bool MakeHierarchizedList(s3obj_list_t& list, bool haveSlash);
};
@ -116,3 +136,12 @@ void show_help(void);
void show_version(void);
#endif // S3FS_S3FS_UTIL_H_
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: noet sw=4 ts=4 fdm=marker
* vim<600: noet sw=4 ts=4
*/

View File

@ -18,6 +18,7 @@
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <syslog.h>
@ -123,26 +124,98 @@ string urlEncode(const string &s)
{
string result;
for (unsigned i = 0; i < s.length(); ++i) {
char c = s[i];
if (c == '/' // Note- special case for fuse paths...
|| c == '.'
|| c == '-'
|| c == '_'
|| c == '~'
|| (c >= 'a' && c <= 'z')
|| (c >= 'A' && c <= 'Z')
|| (c >= '0' && c <= '9')) {
result += c;
} else {
result += "%";
result += hexAlphabet[static_cast<unsigned char>(c) / 16];
result += hexAlphabet[static_cast<unsigned char>(c) % 16];
}
}
return result;
}
/**
* urlEncode a fuse path,
* taking into special consideration "/",
* otherwise regular urlEncode.
*/
string urlEncode2(const string &s)
{
string result;
for (unsigned i = 0; i < s.length(); ++i) {
char c = s[i];
if (c == '=' // Note- special case for fuse paths...
|| c == '&' // Note- special case for s3...
|| c == '%'
|| c == '.'
|| c == '-'
|| c == '_'
|| c == '~'
|| (c >= 'a' && c <= 'z')
|| (c >= 'A' && c <= 'Z')
|| (c >= '0' && c <= '9')) {
result += c;
} else {
result += "%";
result += hexAlphabet[static_cast<unsigned char>(c) / 16];
result += hexAlphabet[static_cast<unsigned char>(c) % 16];
}
}
return result;
}
string urlDecode(const string& s)
{
string result;
for(unsigned i = 0; i < s.length(); ++i){
if(s[i] != '%'){
result += s[i];
}else{
char ch = 0;
if(s.length() <= ++i){
break; // wrong format.
}
ch += ('0' <= s[i] && s[i] <= '9') ? (s[i] - '0') : ('A' <= s[i] && s[i] <= 'F') ? (s[i] - 'A' + 0x0a) : ('a' <= s[i] && s[i] <= 'f') ? (s[i] - 'a' + 0x0a) : 0x00;
if(s.length() <= ++i){
break; // wrong format.
}
ch *= 16;
ch += ('0' <= s[i] && s[i] <= '9') ? (s[i] - '0') : ('A' <= s[i] && s[i] <= 'F') ? (s[i] - 'A' + 0x0a) : ('a' <= s[i] && s[i] <= 'f') ? (s[i] - 'a' + 0x0a) : 0x00;
result += ch;
}
}
return result;
}
bool takeout_str_dquart(string& str)
{
size_t pos;
// '"' for start
if(string::npos != (pos = str.find_first_of("\""))){
str = str.substr(pos + 1);
// '"' for end
if(string::npos == (pos = str.find_last_of("\""))){
return false;
}
str = str.substr(0, pos);
if(string::npos != str.find_first_of("\"")){
return false;
}
}
return true;
}
//
// ex. target="http://......?keyword=value&..."
//
@ -169,38 +242,11 @@ bool get_keyword_value(string& target, const char* keyword, string& value)
return true;
}
string prepare_url(const char* url)
{
FPRNINFO("URL is %s", url);
string uri;
string host;
string path;
string url_str = str(url);
string token = str("/" + bucket);
int bucket_pos = url_str.find(token);
int bucket_length = token.size();
int uri_length = 7;
if(!strncasecmp(url_str.c_str(), "https://", 8)){
uri_length = 8;
}
uri = url_str.substr(0, uri_length);
host = bucket + "." + url_str.substr(uri_length, bucket_pos - uri_length).c_str();
path = url_str.substr((bucket_pos + bucket_length));
url_str = uri + host + path;
FPRNINFO("URL changed is %s", url_str.c_str());
return str(url_str);
}
/**
* Returns the current date
* in a format suitable for a HTTP request header.
*/
string get_date_rfc850()
{
char buf[100];
time_t t = time(NULL);
@ -208,3 +254,32 @@ string get_date()
return buf;
}
void get_date_sigv3(string& date, string& date8601)
{
time_t tm = time(NULL);
date = get_date_string(tm);
date8601 = get_date_iso8601(tm);
}
string get_date_string(time_t tm)
{
char buf[100];
strftime(buf, sizeof(buf), "%Y%m%d", gmtime(&tm));
return buf;
}
string get_date_iso8601(time_t tm)
{
char buf[100];
strftime(buf, sizeof(buf), "%Y%m%dT%H%M%SZ", gmtime(&tm));
return buf;
}
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: noet sw=4 ts=4 fdm=marker
* vim<600: noet sw=4 ts=4
*/
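For reference, the two timestamp formats produced by get_date_string() and get_date_iso8601() above can be reproduced from the shell; this is only an illustrative sketch, not part of the change.
# date-only value, e.g. 20150719, matching strftime("%Y%m%d")
date -u +%Y%m%d
# ISO 8601 basic timestamp, e.g. 20150719T161433Z,
# matching strftime("%Y%m%dT%H%M%SZ")
date -u +%Y%m%dT%H%M%SZ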

View File

@ -1,3 +1,22 @@
/*
* s3fs - FUSE-based file system backed by Amazon S3
*
* Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef S3FS_STRING_UTIL_H_
#define S3FS_STRING_UTIL_H_
@ -6,6 +25,7 @@
*/
#include <string.h>
#include <syslog.h>
#include <sys/types.h>
#include <string>
#include <sstream>
@ -26,9 +46,23 @@ std::string trim_right(const std::string &s, const std::string &t = SPACES);
std::string trim(const std::string &s, const std::string &t = SPACES);
std::string lower(std::string s);
std::string IntToStr(int);
std::string get_date_rfc850(void);
void get_date_sigv3(std::string& date, std::string& date8601);
std::string get_date_string(time_t tm);
std::string get_date_iso8601(time_t tm);
std::string urlEncode(const std::string &s);
std::string prepare_url(const char* url);
std::string urlEncode2(const std::string &s);
std::string urlDecode(const std::string& s);
bool takeout_str_dquart(std::string& str);
bool get_keyword_value(std::string& target, const char* keyword, std::string& value);
#endif // S3FS_STRING_UTIL_H_
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: noet sw=4 ts=4 fdm=marker
* vim<600: noet sw=4 ts=4
*/

44
src/test_string_util.cpp Normal file
View File

@ -0,0 +1,44 @@
/*
* s3fs - FUSE-based file system backed by Amazon S3
*
* Copyright 2014 Andrew Gaul <andrew@gaul.org>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <string>
#include "string_util.h"
#include "test_util.h"
int main(int argc, char *argv[])
{
ASSERT_EQUALS(std::string("1234"), trim(" 1234 "));
ASSERT_EQUALS(std::string("1234"), trim("1234 "));
ASSERT_EQUALS(std::string("1234"), trim(" 1234"));
ASSERT_EQUALS(std::string("1234"), trim("1234"));
ASSERT_EQUALS(std::string("1234 "), trim_left(" 1234 "));
ASSERT_EQUALS(std::string("1234 "), trim_left("1234 "));
ASSERT_EQUALS(std::string("1234"), trim_left(" 1234"));
ASSERT_EQUALS(std::string("1234"), trim_left("1234"));
ASSERT_EQUALS(std::string(" 1234"), trim_right(" 1234 "));
ASSERT_EQUALS(std::string("1234"), trim_right("1234 "));
ASSERT_EQUALS(std::string(" 1234"), trim_right(" 1234"));
ASSERT_EQUALS(std::string("1234"), trim_right("1234"));
return 0;
}

34
src/test_util.h Normal file
View File

@ -0,0 +1,34 @@
/*
* s3fs - FUSE-based file system backed by Amazon S3
*
* Copyright 2014 Andrew Gaul <andrew@gaul.org>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <cstdlib>
#include <iostream>
template <typename T> void assert_equals(const T &x, const T &y, const char *file, int line)
{
if (x != y) {
std::cerr << x << " != " << y << " at " << file << ":" << line << std::endl;
std::exit(1);
}
}
#define ASSERT_EQUALS(x, y) \
assert_equals((x), (y), __FILE__, __LINE__)

View File

@ -1,3 +1,22 @@
######################################################################
# s3fs - FUSE-based file system backed by Amazon S3
#
# Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
######################################################################
TESTS=small-integration-test.sh
EXTRA_DIST = \
@ -8,3 +27,9 @@ EXTRA_DIST = \
sample_delcache.sh \
sample_ahbe.conf
testdir = test
test_PROGRAMS=rename_before_close
rename_before_close_SOURCES = rename_before_close.c

View File

@ -2,13 +2,22 @@
S3FS=../src/s3fs
S3FS_CREDENTIALS_FILE="passwd-s3fs"
TEST_BUCKET_1="s3fs-integration-test"
TEST_BUCKET_MOUNT_POINT_1=${TEST_BUCKET_1}
if [ ! -f "$S3FS_CREDENTIALS_FILE" ]
then
echo "Missing credentials file: $S3FS_CREDENTIALS_FILE"
exit 1
fi
chmod 600 "$S3FS_CREDENTIALS_FILE"
S3PROXY_VERSION="1.4.0"
S3PROXY_BINARY="s3proxy-${S3PROXY_VERSION}"
if [ ! -e "${S3PROXY_BINARY}" ]; then
wget "https://github.com/andrewgaul/s3proxy/releases/download/s3proxy-${S3PROXY_VERSION}/s3proxy" \
-O "${S3PROXY_BINARY}"
chmod +x "${S3PROXY_BINARY}"
fi

344
test/integration-test-main.sh Executable file
View File

@ -0,0 +1,344 @@
#!/bin/bash
set -o xtrace
set -o errexit
COMMON=integration-test-common.sh
source $COMMON
# Configuration
TEST_TEXT="HELLO WORLD"
TEST_TEXT_FILE=test-s3fs.txt
TEST_DIR=testdir
ALT_TEST_TEXT_FILE=test-s3fs-ALT.txt
TEST_TEXT_FILE_LENGTH=15
BIG_FILE=big-file-s3fs.txt
BIG_FILE_LENGTH=$((25 * 1024 * 1024))
function mk_test_file {
if [ $# == 0 ]; then
TEXT=$TEST_TEXT
else
TEXT=$1
fi
echo $TEXT > $TEST_TEXT_FILE
if [ ! -e $TEST_TEXT_FILE ]
then
echo "Could not create file ${TEST_TEXT_FILE}, it does not exist"
exit 1
fi
}
function rm_test_file {
if [ $# == 0 ]; then
FILE=$TEST_TEXT_FILE
else
FILE=$1
fi
rm -f $FILE
if [ -e $FILE ]
then
echo "Could not cleanup file ${TEST_TEXT_FILE}"
exit 1
fi
}
function mk_test_dir {
mkdir ${TEST_DIR}
if [ ! -d ${TEST_DIR} ]; then
echo "Directory ${TEST_DIR} was not created"
exit 1
fi
}
function rm_test_dir {
rmdir ${TEST_DIR}
if [ -e $TEST_DIR ]; then
echo "Could not remove the test directory, it still exists: ${TEST_DIR}"
exit 1
fi
}
CUR_DIR=`pwd`
TEST_BUCKET_MOUNT_POINT_1=$1
if [ "$TEST_BUCKET_MOUNT_POINT_1" == "" ]; then
echo "Mountpoint missing"
exit 1
fi
cd $TEST_BUCKET_MOUNT_POINT_1
if [ -e $TEST_TEXT_FILE ]
then
rm -f $TEST_TEXT_FILE
fi
# Write a small test file
for x in `seq 1 $TEST_TEXT_FILE_LENGTH`
do
echo "echo ${TEST_TEXT} to ${TEST_TEXT_FILE}"
echo $TEST_TEXT >> $TEST_TEXT_FILE
done
# Verify contents of file
echo "Verifying length of test file"
FILE_LENGTH=`wc -l $TEST_TEXT_FILE | awk '{print $1}'`
if [ "$FILE_LENGTH" -ne "$TEST_TEXT_FILE_LENGTH" ]
then
echo "error: expected $TEST_TEXT_FILE_LENGTH , got $FILE_LENGTH"
exit 1
fi
rm_test_file
##########################################################
# Rename test (individual file)
##########################################################
echo "Testing mv file function ..."
# if the rename file exists, delete it
if [ -e $ALT_TEST_TEXT_FILE ]
then
rm $ALT_TEST_TEXT_FILE
fi
if [ -e $ALT_TEST_TEXT_FILE ]
then
echo "Could not delete file ${ALT_TEST_TEXT_FILE}, it still exists"
exit 1
fi
# create the test file again
mk_test_file
#rename the test file
mv $TEST_TEXT_FILE $ALT_TEST_TEXT_FILE
if [ ! -e $ALT_TEST_TEXT_FILE ]
then
echo "Could not move file"
exit 1
fi
# Check the contents of the alt file
ALT_TEXT_LENGTH=`echo $TEST_TEXT | wc -c | awk '{print $1}'`
ALT_FILE_LENGTH=`wc -c $ALT_TEST_TEXT_FILE | awk '{print $1}'`
if [ "$ALT_FILE_LENGTH" -ne "$ALT_TEXT_LENGTH" ]
then
echo "moved file length is not as expected expected: $ALT_TEXT_LENGTH got: $ALT_FILE_LENGTH"
exit 1
fi
# clean up
rm_test_file $ALT_TEST_TEXT_FILE
##########################################################
# Rename test (individual directory)
##########################################################
echo "Testing mv directory function ..."
if [ -e $TEST_DIR ]; then
echo "Unexpected, this file/directory exists: ${TEST_DIR}"
exit 1
fi
mk_test_dir
mv ${TEST_DIR} ${TEST_DIR}_rename
if [ ! -d "${TEST_DIR}_rename" ]; then
echo "Directory ${TEST_DIR} was not renamed"
exit 1
fi
rmdir ${TEST_DIR}_rename
if [ -e "${TEST_DIR}_rename" ]; then
echo "Could not remove the test directory, it still exists: ${TEST_DIR}_rename"
exit 1
fi
###################################################################
# test redirects > and >>
###################################################################
echo "Testing redirects ..."
mk_test_file ABCDEF
CONTENT=`cat $TEST_TEXT_FILE`
if [ ${CONTENT} != "ABCDEF" ]; then
echo "CONTENT read is unexpected, got ${CONTENT}, expected ABCDEF"
exit 1
fi
echo XYZ > $TEST_TEXT_FILE
CONTENT=`cat $TEST_TEXT_FILE`
if [ ${CONTENT} != "XYZ" ]; then
echo "CONTENT read is unexpected, got ${CONTENT}, expected XYZ"
exit 1
fi
echo 123456 >> $TEST_TEXT_FILE
LINE1=`sed -n '1,1p' $TEST_TEXT_FILE`
LINE2=`sed -n '2,2p' $TEST_TEXT_FILE`
if [ ${LINE1} != "XYZ" ]; then
echo "LINE1 was not as expected, got ${LINE1}, expected XYZ"
exit 1
fi
if [ ${LINE2} != "123456" ]; then
echo "LINE2 was not as expected, got ${LINE2}, expected 123456"
exit 1
fi
# clean up
rm_test_file
#####################################################################
# Simple directory test mkdir/rmdir
#####################################################################
echo "Testing creation/removal of a directory"
if [ -e $TEST_DIR ]; then
echo "Unexpected, this file/directory exists: ${TEST_DIR}"
exit 1
fi
mk_test_dir
rm_test_dir
##########################################################
# File permissions test (individual file)
##########################################################
echo "Testing chmod file function ..."
# create the test file again
mk_test_file
ORIGINAL_PERMISSIONS=$(stat --format=%a $TEST_TEXT_FILE)
chmod 777 $TEST_TEXT_FILE;
# if they're the same, we have a problem.
if [ $(stat --format=%a $TEST_TEXT_FILE) == $ORIGINAL_PERMISSIONS ]
then
echo "Could not modify $TEST_TEXT_FILE permissions"
exit 1
fi
# clean up
rm_test_file
##########################################################
# File permissions test (individual file)
##########################################################
echo "Testing chown file function ..."
# create the test file again
mk_test_file
ORIGINAL_PERMISSIONS=$(stat --format=%u:%g $TEST_TEXT_FILE)
chown 1000:1000 $TEST_TEXT_FILE;
# if they're the same, we have a problem.
if [ $(stat --format=%a $TEST_TEXT_FILE) == $ORIGINAL_PERMISSIONS ]
then
echo "Could not modify $TEST_TEXT_FILE ownership"
exit 1
fi
# clean up
rm_test_file
##########################################################
# Testing list
##########################################################
echo "Testing list"
mk_test_file
mk_test_dir
file_cnt=$(ls -1 | wc -l)
if [ $file_cnt != 2 ]; then
echo "Expected 2 file but got $file_cnt"
exit 1
fi
rm_test_file
rm_test_dir
##########################################################
# Testing rename before close
##########################################################
if false; then
echo "Testing rename before close ..."
$CUR_DIR/rename_before_close $TEST_TEXT_FILE
if [ $? != 0 ]; then
echo "rename before close failed"
exit 1
fi
# clean up
rm_test_file
fi
##########################################################
# Testing multi-part upload
##########################################################
echo "Testing multi-part upload ..."
dd if=/dev/urandom of="/tmp/${BIG_FILE}" bs=$BIG_FILE_LENGTH count=1
dd if="/tmp/${BIG_FILE}" of="${BIG_FILE}" bs=$BIG_FILE_LENGTH count=1
# Verify contents of file
echo "Comparing test file"
if ! cmp "/tmp/${BIG_FILE}" "${BIG_FILE}"
then
exit 1
fi
rm -f "/tmp/${BIG_FILE}"
rm -f "${BIG_FILE}"
##########################################################
# Testing special characters
##########################################################
echo "Testing special characters ..."
ls 'special' 2>&1 | grep -q 'No such file or directory'
ls 'special?' 2>&1 | grep -q 'No such file or directory'
ls 'special*' 2>&1 | grep -q 'No such file or directory'
ls 'special~' 2>&1 | grep -q 'No such file or directory'
ls 'specialµ' 2>&1 | grep -q 'No such file or directory'
##########################################################
# Testing extended attributes
##########################################################
rm -f $TEST_TEXT_FILE
touch $TEST_TEXT_FILE
# set value
setfattr -n key1 -v value1 $TEST_TEXT_FILE
getfattr -n key1 --only-values $TEST_TEXT_FILE | grep -q '^value1$'
# append value
setfattr -n key2 -v value2 $TEST_TEXT_FILE
getfattr -n key1 --only-values $TEST_TEXT_FILE | grep -q '^value1$'
getfattr -n key2 --only-values $TEST_TEXT_FILE | grep -q '^value2$'
# remove value
setfattr -x key1 $TEST_TEXT_FILE
! getfattr -n key1 --only-values $TEST_TEXT_FILE
getfattr -n key2 --only-values $TEST_TEXT_FILE | grep -q '^value2$'
#####################################################################
# Tests are finished
#####################################################################
# Unmount the bucket
cd $CUR_DIR
echo "All tests complete."

1
test/passwd-s3fs Normal file
View File

@ -0,0 +1 @@
local-identity:local-credential

View File

@ -0,0 +1,88 @@
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
static const char FILE_CONTENT[] = "XXXX";
#define PROG "rename_before_close"
static char *
filename_to_mkstemp_template(const char *file)
{
size_t len = strlen(file);
static const char suffix[] = ".XXXXXX";
size_t new_len = len + sizeof(suffix);
char *ret_str = calloc(1, new_len);
int ret = snprintf(ret_str, new_len, "%s%s", file, suffix);
assert(ret == new_len - 1);
assert(ret_str[new_len - 1] == '\0'); /* snprintf places the terminator at the last byte of the buffer */
return ret_str;
}
static off_t
get_file_size(const char *file)
{
struct stat ss;
printf(PROG ": stat(%s)\n", file);
int ret = lstat(file, &ss);
assert(ret == 0);
return ss.st_size;
}
static void
test_rename_before_close(const char *file)
{
char *template = filename_to_mkstemp_template(file);
printf(PROG ": mkstemp(%s)\n", template);
int fd = mkstemp(template);
assert(fd >= 0);
sleep(1);
printf(PROG ": write(%s)\n", template);
int ret = write(fd, FILE_CONTENT, sizeof(FILE_CONTENT));
assert(ret == sizeof(FILE_CONTENT));
sleep(1);
printf(PROG ": fsync(%s)\n", template);
ret = fsync(fd);
assert(ret == 0);
sleep(1);
assert(get_file_size(template) == sizeof(FILE_CONTENT));
sleep(1);
printf(PROG ": rename(%s, %s)\n", template, file);
ret = rename(template, file);
assert(ret == 0);
sleep(1);
printf(PROG ": close(%s)\n", file);
ret = close(fd);
assert(ret == 0);
sleep(1);
assert(get_file_size(file) == sizeof(FILE_CONTENT));
}
int
main(int argc, char *argv[])
{
setvbuf(stdout, NULL, _IONBF, 0);
if (argc < 2) {
printf("Usage: %s <file>", argv[0]);
return 1;
}
test_rename_before_close(argv[1]);
return 0;
}

8
test/s3proxy.conf Normal file
View File

@ -0,0 +1,8 @@
s3proxy.endpoint=http://127.0.0.1:8080
s3proxy.authorization=aws-v2
s3proxy.identity=local-identity
s3proxy.credential=local-credential
jclouds.provider=transient
jclouds.identity=remote-identity
jclouds.credential=remote-credential

View File

@ -1,284 +1,69 @@
#!/bin/bash
set -o xtrace
set -o errexit
# Require root
REQUIRE_ROOT=require-root.sh
#source $REQUIRE_ROOT
source integration-test-common.sh
# Configuration
TEST_TEXT="HELLO WORLD"
TEST_TEXT_FILE=test-s3fs.txt
TEST_DIR=testdir
ALT_TEST_TEXT_FILE=test-s3fs-ALT.txt
TEST_TEXT_FILE_LENGTH=15
function retry {
set +o errexit
N=$1; shift;
status=0
for i in $(seq $N); do
$@
status=$?
if [ $status == 0 ]; then
break
fi
sleep 1
done
if [ $status != 0 ]; then
echo "timeout waiting for $@"
fi
set -o errexit
return $status
}
function exit_handler {
kill $S3PROXY_PID
retry 30 fusermount -u $TEST_BUCKET_MOUNT_POINT_1
}
trap exit_handler EXIT
stdbuf -oL -eL java -jar "$S3PROXY_BINARY" --properties s3proxy.conf | stdbuf -oL -eL sed -u "s/^/s3proxy: /" &
# wait for S3Proxy to start
for i in $(seq 30);
do
if exec 3<>"/dev/tcp/localhost/8080";
then
exec 3<&- # Close for read
exec 3>&- # Close for write
break
fi
sleep 1
done
S3PROXY_PID=$(netstat -lpnt | grep :8080 | awk '{ print $7 }' | sed -u 's|/java||')
# Mount the bucket
if [ ! -d $TEST_BUCKET_MOUNT_POINT_1 ]
then
mkdir -p $TEST_BUCKET_MOUNT_POINT_1
fi
$S3FS $TEST_BUCKET_1 $TEST_BUCKET_MOUNT_POINT_1 -o passwd_file=$S3FS_CREDENTIALS_FILE
CUR_DIR=`pwd`
cd $TEST_BUCKET_MOUNT_POINT_1
stdbuf -oL -eL $S3FS $TEST_BUCKET_1 $TEST_BUCKET_MOUNT_POINT_1 \
-o createbucket \
-o passwd_file=$S3FS_CREDENTIALS_FILE \
-o sigv2 \
-o url=http://127.0.0.1:8080 \
-o use_path_request_style -f -o f2 -d -d |& stdbuf -oL -eL sed -u "s/^/s3fs: /" &
if [ -e $TEST_TEXT_FILE ]
then
rm -f $TEST_TEXT_FILE
fi
retry 30 grep $TEST_BUCKET_MOUNT_POINT_1 /proc/mounts || exit 1
# Write a small test file
for x in `seq 1 $TEST_TEXT_FILE_LENGTH`
do
echo "echo ${TEST_TEXT} to ${TEST_TEXT_FILE}"
echo $TEST_TEXT >> $TEST_TEXT_FILE
done
# Verify contents of file
echo "Verifying length of test file"
FILE_LENGTH=`wc -l $TEST_TEXT_FILE | awk '{print $1}'`
if [ "$FILE_LENGTH" -ne "$TEST_TEXT_FILE_LENGTH" ]
then
echo "error: expected $TEST_TEXT_FILE_LENGTH , got $FILE_LENGTH"
exit 1
fi
# Delete the test file
rm $TEST_TEXT_FILE
if [ -e $TEST_TEXT_FILE ]
then
echo "Could not delete file, it still exists"
exit 1
fi
##########################################################
# Rename test (individual file)
##########################################################
echo "Testing mv file function ..."
# if the rename file exists, delete it
if [ -e $ALT_TEST_TEXT_FILE ]
then
rm $ALT_TEST_TEXT_FILE
fi
if [ -e $ALT_TEST_TEXT_FILE ]
then
echo "Could not delete file ${ALT_TEST_TEXT_FILE}, it still exists"
exit 1
fi
# create the test file again
echo $TEST_TEXT > $TEST_TEXT_FILE
if [ ! -e $TEST_TEXT_FILE ]
then
echo "Could not create file ${TEST_TEXT_FILE}, it does not exist"
exit 1
fi
#rename the test file
mv $TEST_TEXT_FILE $ALT_TEST_TEXT_FILE
if [ ! -e $ALT_TEST_TEXT_FILE ]
then
echo "Could not move file"
exit 1
fi
# Check the contents of the alt file
ALT_TEXT_LENGTH=`echo $TEST_TEXT | wc -c | awk '{print $1}'`
ALT_FILE_LENGTH=`wc -c $ALT_TEST_TEXT_FILE | awk '{print $1}'`
if [ "$ALT_FILE_LENGTH" -ne "$ALT_TEXT_LENGTH" ]
then
echo "moved file length is not as expected expected: $ALT_TEXT_LENGTH got: $ALT_FILE_LENGTH"
exit 1
fi
# clean up
rm $ALT_TEST_TEXT_FILE
if [ -e $ALT_TEST_TEXT_FILE ]
then
echo "Could not cleanup file ${ALT_TEST_TEXT_FILE}, it still exists"
exit 1
fi
##########################################################
# Rename test (individual directory)
##########################################################
echo "Testing mv directory function ..."
if [ -e $TEST_DIR ]; then
echo "Unexpected, this file/directory exists: ${TEST_DIR}"
exit 1
fi
mkdir ${TEST_DIR}
if [ ! -d ${TEST_DIR} ]; then
echo "Directory ${TEST_DIR} was not created"
exit 1
fi
mv ${TEST_DIR} ${TEST_DIR}_rename
if [ ! -d "${TEST_DIR}_rename" ]; then
echo "Directory ${TEST_DIR} was not renamed"
exit 1
fi
rmdir ${TEST_DIR}_rename
if [ -e "${TEST_DIR}_rename" ]; then
echo "Could not remove the test directory, it still exists: ${TEST_DIR}_rename"
exit 1
fi
###################################################################
# test redirects > and >>
###################################################################
echo "Testing redirects ..."
echo ABCDEF > $TEST_TEXT_FILE
if [ ! -e $TEST_TEXT_FILE ]
then
echo "Could not create file ${TEST_TEXT_FILE}, it does not exist"
exit 1
fi
CONTENT=`cat $TEST_TEXT_FILE`
if [ ${CONTENT} != "ABCDEF" ]; then
echo "CONTENT read is unexpected, got ${CONTENT}, expected ABCDEF"
exit 1
fi
echo XYZ > $TEST_TEXT_FILE
CONTENT=`cat $TEST_TEXT_FILE`
if [ ${CONTENT} != "XYZ" ]; then
echo "CONTENT read is unexpected, got ${CONTENT}, expected XYZ"
exit 1
fi
echo 123456 >> $TEST_TEXT_FILE
LINE1=`sed -n '1,1p' $TEST_TEXT_FILE`
LINE2=`sed -n '2,2p' $TEST_TEXT_FILE`
if [ ${LINE1} != "XYZ" ]; then
echo "LINE1 was not as expected, got ${LINE1}, expected XYZ"
exit 1
fi
if [ ${LINE2} != "123456" ]; then
echo "LINE2 was not as expected, got ${LINE2}, expected 123456"
exit 1
fi
# clean up
rm $TEST_TEXT_FILE
if [ -e $TEST_TEXT_FILE ]
then
echo "Could not cleanup file ${TEST_TEXT_FILE}, it still exists"
exit 1
fi
#####################################################################
# Simple directory test mkdir/rmdir
#####################################################################
echo "Testing creation/removal of a directory"
if [ -e $TEST_DIR ]; then
echo "Unexpected, this file/directory exists: ${TEST_DIR}"
exit 1
fi
mkdir ${TEST_DIR}
if [ ! -d ${TEST_DIR} ]; then
echo "Directory ${TEST_DIR} was not created"
exit 1
fi
rmdir ${TEST_DIR}
if [ -e $TEST_DIR ]; then
echo "Could not remove the test directory, it still exists: ${TEST_DIR}"
exit 1
fi
##########################################################
# File permissions test (individual file)
##########################################################
echo "Testing chmod file function ..."
# create the test file again
echo $TEST_TEXT > $TEST_TEXT_FILE
if [ ! -e $TEST_TEXT_FILE ]
then
echo "Could not create file ${TEST_TEXT_FILE}"
exit 1
fi
ORIGINAL_PERMISSIONS=$(stat --format=%a $TEST_TEXT_FILE)
chmod 777 $TEST_TEXT_FILE;
# if they're the same, we have a problem.
if [ $(stat --format=%a $TEST_TEXT_FILE) == $ORIGINAL_PERMISSIONS ]
then
echo "Could not modify $TEST_TEXT_FILE permissions"
exit 1
fi
# clean up
rm $TEST_TEXT_FILE
if [ -e $TEST_TEXT_FILE ]
then
echo "Could not cleanup file ${TEST_TEXT_FILE}"
exit 1
fi
##########################################################
# File permissions test (individual file)
##########################################################
echo "Testing chown file function ..."
# create the test file again
echo $TEST_TEXT > $TEST_TEXT_FILE
if [ ! -e $TEST_TEXT_FILE ]
then
echo "Could not create file ${TEST_TEXT_FILE}"
exit 1
fi
ORIGINAL_PERMISSIONS=$(stat --format=%u:%g $TEST_TEXT_FILE)
chown 1000:1000 $TEST_TEXT_FILE;
# if they're the same, we have a problem.
if [ $(stat --format=%a $TEST_TEXT_FILE) == $ORIGINAL_PERMISSIONS ]
then
echo "Could not modify $TEST_TEXT_FILE ownership"
exit 1
fi
# clean up
rm $TEST_TEXT_FILE
if [ -e $TEST_TEXT_FILE ]
then
echo "Could not cleanup file ${TEST_TEXT_FILE}"
exit 1
fi
#####################################################################
# Tests are finished
#####################################################################
# Unmount the bucket
cd $CUR_DIR
umount $TEST_BUCKET_MOUNT_POINT_1
./integration-test-main.sh $TEST_BUCKET_MOUNT_POINT_1
echo "All tests complete."