111 Commits
v1.90 ... v1.91

Author SHA1 Message Date
49d92c7022 Update ChangeLog and configure.ac for 1.91 (#1920)
Fixes #1876.
2022-03-08 07:48:15 +09:00
d842d45b2b Fixed a truncation bug when shrinking a file 2022-03-02 22:41:10 +09:00
684ced5a41 Made credential handling in S3fsCred more robust 2022-03-02 22:39:15 +09:00
afb0897553 Typos 2022-02-24 19:15:00 +09:00
8a5c4306f5 Preserve sub-second precision where possible (#1915) 2022-02-23 23:58:51 +09:00
01e24967b6 Add test for external object creation (#1900)
This test demonstrates the behavior before and after the stat cache
timeout when using noobj_cache.
2022-02-23 23:34:58 +09:00
08adffd2fe Fix typos (#1916) 2022-02-23 23:31:52 +09:00
0842c5718f Use more new file names for every test (#1902)
This makes the tests more robust.  Also fix filename to end in .txt.
2022-02-23 22:59:21 +09:00
5452e9cb10 Dynamically generate dates for man page file 2022-02-23 21:51:47 +09:00
232ff28cc7 Re-re-fix propagating the return code (#1903)
Previously the integration tests were exiting after the first failed
test instead of running all of them and reporting their statuses.
Follows on to dbf93c0152.
2022-02-23 14:27:29 +09:00
81ed2bd91e Propagate deferred exit status from main (#1912)
Previously s3fs always returned zero when the bucket failed to mount.
Fixes #1911.
2022-02-23 10:09:12 +09:00
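A sketch of the pattern behind this entry (structure and names hypothetical, not s3fs's actual code): a failure detected after FUSE has taken over, such as the bucket check, is recorded and merged into the process exit code instead of returning fuse_main()'s zero unconditionally.
```cpp
#define FUSE_USE_VERSION 26
#include <fuse.h>

// Hypothetical: set from an init/check callback when mounting the bucket
// fails, since fuse_main() itself can still return zero in that case.
static int deferred_exit_status = 0;
static struct fuse_operations s3fs_oper = {};

int main(int argc, char* argv[])
{
    int fuse_res = fuse_main(argc, argv, &s3fs_oper, NULL);
    // Prefer FUSE's own error code; otherwise surface the deferred failure.
    return (0 != fuse_res) ? fuse_res : deferred_exit_status;
}
```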
305d660e39 Use custom CA bundle instead of ignoring errors (#1910)
Fixes #1846.
2022-02-23 10:04:05 +09:00
a716c72d37 Update notsup_compat_dir in --help 2022-02-21 19:29:15 +09:00
302150b4f5 Filter mountpoints via mount -t (#1905)
This is portable between Linux and macOS.
2022-02-20 20:49:35 +09:00
3fa03d4e1e Pass explicit -p option to ps (#1904)
This ensures that a pid follows.
2022-02-20 20:40:29 +09:00
e014d6e646 Changed to Rocky Linux 8 instead of CentOS 8 2022-02-20 19:29:34 +09:00
c2a49b7b1a Rephrase the description of notsup_compat_dir 2022-02-20 16:06:57 +09:00
265fa9e47a Add performance considerations section to man page 2022-02-20 16:06:57 +09:00
d31cbda7b6 Fixed a bug in credential checking 2022-02-19 23:22:15 +09:00
b64dc7749c Moved parameter analysis processing to S3fsCred class 2022-02-19 17:23:40 +09:00
b9e2be5c21 Fixed two typos in configure.ac 2022-02-19 17:22:43 +09:00
839a33de49 Fixed not to call Flush even if the file size is increased (#1887)
Changed s3fs_truncate function.
This change reduces the number of file uploads if the file size is changed.

On macOS, I found that the truncate call with "size=0" does not reflect the file size (the reason is not understood...).
To avoid this, the flush method is called as before, but only when "size=0".

Other than that, I found a bug in FdEntity::Open() and fixed it.

Fixes #1875.
2022-02-15 21:29:07 +09:00
4dfe2bfdd7 Include climits to support musl libc
The PATH_MAX constant is not visible from any of the currently included
header files on systems with musl libc, where compilation fails with the
error below. The constant is defined in limits.h, which is included
directly via the climits header file.

fdcache.cpp: In static member function 'static FILE* FdManager::MakeTempFile()':
fdcache.cpp:381:14: error: 'PATH_MAX' was not declared in this scope
  381 |     char cfn[PATH_MAX];
      |              ^~~~~~~~

Fixes: d67b83e671 ("Allow configuration for temporary files directory")
2022-02-14 09:19:30 +09:00
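A minimal reproduction of the failing pattern and its fix (the surrounding code is illustrative, not the actual FdManager::MakeTempFile from fdcache.cpp):
```cpp
#include <climits>   // pulls in limits.h, which defines PATH_MAX; musl libc does not expose it otherwise
#include <cstdio>

int main()
{
    char cfn[PATH_MAX];   // the declaration that failed to compile on musl without <climits>
    return snprintf(cfn, sizeof(cfn), "/tmp/s3fstmp.XXXXXX") < 0;
}
```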
1678803566 Added S3fsCred class and moved Credential related processing in it 2022-02-13 21:38:30 +09:00
d7e929e0a8 Fixed some GitHub Actions errors. (#1886)
- Fix knownConditionTrueFalse cppcheck (2.7) error on macOS
- Fixed package installation failure of the appstream download on centos8
2022-02-13 14:23:35 +09:00
94e8e23eef Fixed test_external_directory_creation test when cache enabled (#1885) 2022-02-13 13:32:19 +09:00
dbf93c0152 Propagate return code properly (#1884)
Previously this did not propagate test failures.  A bad rebase
introduced this logic in 495d51113c.
2022-02-06 22:45:20 +09:00
9224f792f0 Use CLOCK_REALTIME for UTIME_NOW (#1881)
Previously s3fs_utimens used CLOCK_MONOTONIC_COARSE which was not
1970-based.  Found via pjdfstest.  References #1589.
2022-01-30 22:19:15 +09:00
f6ed972926 Always flush open files with O_CREAT flag (#1879)
Previously s3fs only created files that had dirty data and not those
with zero bytes. Regression from
771bbfeac5.  References #1013.  Found
via pjdfstest.  References #1589.
2022-01-30 22:02:37 +09:00
0c75a63184 Preserve sub-second precision with utimens (#1880)
Found via pjdfstest.  References #1589.
2022-01-30 21:45:51 +09:00
30cf7a50bb Added stat check for subdir created with awscli 2022-01-30 18:31:36 +09:00
e452ef3940 Fixed fault tolerance when getting the time stamp fails 2022-01-30 18:31:36 +09:00
cd5a69b9eb Handle UTIME_NOW and UTIME_OMIT special values (#1868)
FUSE 3 will require this behavior.  References #1159.
2022-01-29 11:35:37 +09:00
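A simplified sketch of what handling these values involves (names hypothetical; the real logic lives in s3fs_utimens): UTIME_OMIT keeps the stored timestamp, and UTIME_NOW resolves to the current time via the wall clock that the "Use CLOCK_REALTIME for UTIME_NOW" entry above switches to.
```cpp
#include <ctime>
#include <sys/stat.h>   // UTIME_NOW, UTIME_OMIT

// Resolve one element of utimensat(2)'s times[2] array against the
// timestamp currently stored for the object.
struct timespec resolve_time(struct timespec requested, struct timespec stored)
{
    if(UTIME_OMIT == requested.tv_nsec){
        return stored;                         // leave the timestamp unchanged
    }
    if(UTIME_NOW == requested.tv_nsec){
        struct timespec now;
        clock_gettime(CLOCK_REALTIME, &now);   // 1970-based, unlike CLOCK_MONOTONIC_COARSE
        return now;
    }
    return requested;                          // explicit time supplied by the caller
}
```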
74c11ef226 Check bucket before trying to create it (#1874)
This makes it easier to run tests against S3 other than S3Proxy.
2022-01-26 23:42:12 +09:00
662882d2f0 Always call clock_gettime(2) (#1871)
e01ded9e27 introduced this compatibility
shim, but macOS 10.12 (2016) added clock_gettime:
https://stackoverflow.com/a/39801564 . Also remove the fallback to
time(3), which loses precision.
2022-01-25 08:36:27 +09:00
de0c87c801 Convert S3FS_LOW_LOGPRN from a macro to a function (#1869)
This shrinks the binary size from 770 to 540 KB and reduces compile
times.
2022-01-23 23:10:09 +09:00
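The idea, as a hedged sketch (names and format are illustrative, not the actual S3FS_LOW_LOGPRN internals): the filtering and formatting calls are emitted once in a single out-of-line function instead of being macro-expanded at every log site, which is where the binary-size saving comes from.
```cpp
#include <cstdarg>
#include <cstdio>

static int current_log_level = 3;   // hypothetical global log level

// One function body replaces a large macro expansion at each call site.
static void s3fs_low_logprn(int level, const char* file, int line, const char* fmt, ...)
{
    if(level > current_log_level){
        return;                      // filtered out; call sites stay tiny either way
    }
    va_list ap;
    va_start(ap, fmt);
    fprintf(stderr, "%s(%d): ", file, line);
    vfprintf(stderr, fmt, ap);
    fputc('\n', stderr);
    va_end(ap);
}

int main()
{
    s3fs_low_logprn(1, __FILE__, __LINE__, "mounted %s", "bucket");
    return 0;
}
```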
41aaa4184f Avoid double setting values in statfs 2022-01-23 21:49:51 +09:00
451602e58d Remove unnecessary conditional for automake 2022-01-23 21:49:51 +09:00
581f5c0356 Move strptime polyfill to string_util 2022-01-23 21:49:51 +09:00
e5f6f112db Fix typo 2022-01-23 21:49:51 +09:00
b3cef944b2 Fix test_page_list_SOURCES has no if MSYS clause 2022-01-23 21:49:51 +09:00
6edb6067f3 Remove strcasestr polyfill 2022-01-23 21:49:51 +09:00
b2c659c0a6 Disable compiling polyfills in non-MSYS2 environments 2022-01-23 21:49:51 +09:00
807ea52ba7 Remove duplicates in .gitignore 2022-01-23 21:49:51 +09:00
3ac9f571f5 Use std::get_time instead in strptime polyfill 2022-01-23 21:49:51 +09:00
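A sketch of such a polyfill for platforms without strptime(3), e.g. MSYS2/win32 (name and return-pointer handling are illustrative, not the exact code from the commit; std::get_time requires C++11):
```cpp
#include <cstring>
#include <ctime>
#include <iomanip>
#include <sstream>

char* my_strptime(const char* s, const char* fmt, struct tm* tm)
{
    std::istringstream iss(s);
    iss >> std::get_time(tm, fmt);
    if(iss.fail()){
        return NULL;                                   // parse error, as strptime reports
    }
    if(iss.eof()){
        return const_cast<char*>(s) + strlen(s);       // consumed the whole input
    }
    // strptime returns a pointer just past the last consumed character.
    return const_cast<char*>(s) + static_cast<std::streamoff>(iss.tellg());
}
```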
d95a612548 Revert "Run autoupdate"
This reverts commit 0b1d801598164c45e7c9e89ebd30ddde8251befa.
2022-01-23 21:49:51 +09:00
19303a546e Fix the statfs issue, using f_frsize instead 2022-01-23 21:49:51 +09:00
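Two entries above touch statfs; a short illustration of why f_frsize matters (block sizes and helper are hypothetical): statvfs consumers multiply the block counts by f_frsize, so the counts must be expressed in that unit, and each field should be assigned exactly once.
```cpp
#include <sys/statvfs.h>

void fill_statvfs(struct statvfs* st, unsigned long long total_bytes, unsigned long long free_bytes)
{
    st->f_bsize  = 4096;                        // preferred I/O size
    st->f_frsize = 4096;                        // unit for the block counts below
    st->f_blocks = total_bytes / st->f_frsize;  // total capacity, in f_frsize units
    st->f_bfree  = free_bytes  / st->f_frsize;
    st->f_bavail = st->f_bfree;                 // each field set once, no duplicates
}
```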
4d117fd0af Add instructions for Windows compilation 2022-01-23 21:49:51 +09:00
2bf84fc705 Ignore .exe files 2022-01-23 21:49:51 +09:00
70692ee770 Run autoupdate 2022-01-23 21:49:51 +09:00
6370e150dd Disable features that cause problems on Windows 2022-01-23 21:49:51 +09:00
b14e39815b Use polyfills in MSYS2 environment 2022-01-23 21:49:51 +09:00
6aaf9433a5 Add polyfills for MSYS2 environment 2022-01-23 21:49:51 +09:00
46014397d8 Added testing with a shell script static analysis tool (ShellCheck) 2022-01-22 22:23:08 +09:00
93d1c30d4d Use XML parsing with PUT HTTP 200 responses (#1858)
This works around the missing strcasestr on win32.  References #728.
2022-01-14 16:10:22 +09:00
6300859c80 Prefer = over == for older shell compatibility (#1857) 2022-01-14 12:40:55 +09:00
2892d3b755 Call curl --fail to propagate exit code (#1856) 2022-01-14 11:52:52 +09:00
25012f3839 Fix typo in -o enable_unsigned_payload 2022-01-12 22:50:49 +09:00
3dfc1184ca Remove python2 from bullseye 2022-01-10 19:34:36 +09:00
53d1b04cc2 Add new suppressions for clang-tidy 13 (#1847) 2022-01-09 21:42:36 +09:00
b67465b91d Specify C++03 for CI (#1850) 2022-01-09 20:48:09 +09:00
cba65fc51a Remove Python 2 (#1849) 2022-01-09 20:37:15 +09:00
75b16c72aa Build s3fs in parallel (#1848)
GitHub runners provide 2 Linux CPUs or 3 macOS CPUs:

https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources
2022-01-09 20:22:49 +09:00
577e2bc987 Generate S3Proxy SSL certificate during tests (#1845)
Also provide CA bundle to AWS CLI to work around CI failures instead
of ignoring errors.  Fixes #1812.
2022-01-09 15:13:36 +09:00
adb58af17b Annotate local variables (#1844)
This prevents collisions with other globals.  Fixes #1843.
2022-01-09 13:03:36 +09:00
dd11de3a50 Add Debian Bullseye to CI (#1842)
Stretch is supported until June 2022:

https://wiki.debian.org/LTS
2022-01-09 12:11:00 +09:00
fc7543fa25 Make ut_test compatible with Python 3 (#1838)
This may allow removing Python 2 on newer distros.

Co-authored-by: Takeshi Nakatani <ggtakec@gmail.com>
2022-01-09 11:08:36 +09:00
a44ea7c12a Replace write_multiple_offsets.py with write_multiblock (#1837)
This reduces the dependency on Python 2.
2022-01-09 10:51:17 +09:00
e734763002 Remove createbucket option (#1841)
AWS CLI can do this.  Fixes #1840.
2022-01-05 01:59:31 +09:00
37af08bacf Use JRE instead of JDK for Debian-based distros (#1839)
This reduces the dependencies installed.  It does not appear that
Fedora-based distros support this.

Co-authored-by: Takeshi Nakatani <ggtakec@gmail.com>
2022-01-05 01:29:06 +09:00
616db5bf1c Prefer curl over wget for fetching dependencies (#1836)
The former is lighter-weight and libcurl is already a dependency for
s3fs.
2022-01-05 00:43:36 +09:00
9ad9c382f4 Update README.md
Fixed install error
2022-01-04 22:44:49 +09:00
1d090aa7a3 Install default-jdk-headless instead of default-jdk
This reduces the dependencies installed.
2022-01-04 18:18:24 +09:00
61e9029be4 Fix typos in CI scripts 2022-01-03 19:50:13 +09:00
5de92e9788 Bump CI to Fedora 35 (#1806) 2021-12-02 23:45:19 +09:00
85ca2a3e45 fix mixupload returning EntityTooSmall when a copy part is less than 5MB after split (#1809)
* fix mixupload returning EntityTooSmall when a copy part is less than 5MB after split
* fix a possible part exceeding 5GB when multipart_copy_size is set to 5120MB
* Update curl.cpp
Co-authored-by: liubingrun <liubr1@chinatelecom.cn>
2021-11-27 16:53:26 +09:00
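The constraints behind both fixes, as a sketch (the limits are S3's documented UploadPart/UploadPartCopy bounds; the splitting helper is hypothetical): every part except the last must be at least 5 MiB and no part may exceed 5 GiB, so a requested copy size must be clamped and a split must never strand an undersized remainder.
```cpp
#include <algorithm>
#include <stdint.h>

static const int64_t MIN_PART = 5LL * 1024 * 1024;          // 5 MiB lower bound (non-last parts)
static const int64_t MAX_PART = 5LL * 1024 * 1024 * 1024;   // 5 GiB upper bound (any part)

// Pick an effective part size for copying `size` bytes in pieces of
// roughly `part_size`, staying within both bounds.
int64_t effective_part_size(int64_t size, int64_t part_size)
{
    part_size = std::min(part_size, MAX_PART);          // e.g. multipart_copy_size=5120MB gets clamped
    int64_t parts = (size + part_size - 1) / part_size; // ceiling division
    while(1 < parts && size / parts < MIN_PART){
        --parts;                                        // merge rather than leave a <5 MiB piece
    }
    return (size + parts - 1) / parts;                  // even-ish split within the limits
}
```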
07e2e3f72a Remove sleep 1 from test_update_directory_time (#1803)
Reduces per-flag test run-time by 5 seconds.
2021-11-04 08:16:40 +09:00
3cf00626a2 Add option to allow unsigned payloads (#1801)
This reduces CPU usage of sigv4.  This reduces test run-time by 7
seconds per flag.
2021-11-01 23:33:55 +09:00
e289915dcb Remove require-root script (#1800)
Tests do not require this.
2021-10-31 10:48:15 +09:00
06dec32965 Use AWS CLI to create explicit times in the past (#1797)
s3fs can also do this via utimensat but tests should not trust this.
Also break tests into individual functions. This further reduces test
run-time by 8 seconds per flag.
2021-10-30 10:54:18 +09:00
86317dd185 Replace dd if=/dev/urandom with junk data generator (#1786)
This reduces test run time for a single flag from 73 to 60 seconds.
2021-10-28 22:54:25 +09:00
473da56abf Use default JDK instead of forcing Java 8 (#1796)
S3Proxy requires Java 8 or later, not 8 specifically.
2021-10-28 22:27:48 +09:00
162ab14517 Bump Ubuntu CI to latest non-LTS version (#1794) 2021-10-28 22:10:20 +09:00
40d2e0d1ad Reduce sleep time to 1 (#1793)
This reduces test run-time by 15 seconds per flag, or 2.5 minutes when
testing all flags.
2021-10-27 23:47:08 +09:00
b6c5069ef7 Fixed the test that does multi-block writing in one flush 2021-10-27 08:19:05 +09:00
7273d561f5 Added exclusive control of static variables in s3fs xml parser 2021-10-27 08:18:19 +09:00
78126aea0b Added exclusive control of static variables in s3fs xml parser 2021-10-27 08:18:19 +09:00
7892eee207 Fixed a bug that copied without considering the length of xmlChar 2021-10-27 08:18:19 +09:00
72a9f40f3f Update to S3Proxy 1.9.0 (#1788)
Notably this fixes an issue with the transient provider reading parts
of large files.

Release notes:

https://github.com/gaul/s3proxy/releases/tag/s3proxy-1.9.0
2021-10-26 23:20:52 +09:00
495d51113c Remove unneeded sleeps from tests (#1784)
Also use a unique file name for every test.  This ensures that tests
like test_external_directory_creation and test_external_modification
do not collide.
2021-10-26 23:19:14 +09:00
0abeec9cae Simplify errexit modifications 2021-10-26 21:47:36 +09:00
ea3c21f270 Reduce errexit modifications (#1785)
This is less error prone but requires some magic && ||.
2021-10-25 23:53:45 +09:00
23fe6f4dee Fixed parse_string function in write_multiblock.cc 2021-10-25 17:56:49 +09:00
34ea2acd75 Add a test that does multi-block writing in one flush 2021-10-25 17:56:49 +09:00
ea64886469 Fixed a bug in test_(zero_)cache_file_stat test function 2021-10-24 18:24:12 +09:00
023aaf7dff Fixed wrong cache stat after creating a new file
Also added a test for the cache stat after creating a new file
2021-10-17 16:10:14 +09:00
2f412804e2 Fixed forgetting to clear the dirty flag for meta information
Addressed an error in macOS cppcheck
2021-10-15 22:54:55 +09:00
d6ffd389da Excluded ubuntu 16.04 from the CI build execution environment 2021-10-15 08:54:13 +09:00
be0b17329a Fix wrong function name in log message (#1774) 2021-10-10 11:08:32 +09:00
b4edad86d6 remove Expect: 100-continue header when requesting an IMDSv2 access token 2021-09-09 08:12:36 +09:00
9d1552a54e fix IAM role retrieval from IMDSv2
AWS IMDSv2 support was added in #1462, but the implementation did not
cover the additional IMDS access that occurs with the iam_role=auto
configuration.  This change implements IMDSv2 support for the IMDS
call to determine the instance's role name.

See also
https://stackoverflow.com/questions/69031023/how-to-make-s3fs-use-imds-v2-when-mounting-s3-buckets-from-ec2-instance
2021-09-03 20:36:34 +09:00
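A minimal sketch of the two-step IMDSv2 flow (the endpoints and headers are AWS's documented ones; the helper and error handling are simplified, not s3fs's actual curl code): fetch a session token with a PUT, then present it on the role-name lookup that iam_role=auto performs. Per the sibling entry above, the token request also suppresses curl's Expect: 100-continue header.
```cpp
#include <curl/curl.h>
#include <string>

static size_t collect(char* ptr, size_t size, size_t nmemb, void* userdata)
{
    static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

static std::string imds_request(const char* method, const char* url, struct curl_slist* headers)
{
    std::string body;
    CURL* curl = curl_easy_init();
    if(!curl) return body;
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, method);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return body;
}

int main()
{
    struct curl_slist* ttl = curl_slist_append(NULL, "X-aws-ec2-metadata-token-ttl-seconds: 21600");
    ttl = curl_slist_append(ttl, "Expect:");    // suppress 100-continue on the token PUT
    std::string token = imds_request("PUT", "http://169.254.169.254/latest/api/token", ttl);

    struct curl_slist* hdr = curl_slist_append(NULL, ("X-aws-ec2-metadata-token: " + token).c_str());
    std::string role  = imds_request("GET", "http://169.254.169.254/latest/meta-data/iam/security-credentials/", hdr);

    curl_slist_free_all(ttl);
    curl_slist_free_all(hdr);
    return role.empty() ? 1 : 0;
}
```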
47ebfcc60a Consume return value from curl_easy_setopt (#1759)
Found via Coverity.
2021-09-02 08:07:06 +09:00
beecf32dff fclose(FILE*) instead of close(fileno(FILE*)) (#1758)
This is the same thing but confuses Coverity.
2021-09-01 19:41:55 +09:00
57b2e4a4f1 Fix 32-bit compilation issues (#1757) 2021-08-31 19:36:02 +09:00
48817d849f Require explicit length in s3fs_decode64 (#1755)
This is available from std::string::size in callers.
2021-08-31 09:22:10 +09:00
d9f2d17040 1. fix RowFlush can not upload last part smaller than 5MB using NoCacheMultipartPost; (#1753)
2. fix deadlock in UploadPendingMeta
2021-08-31 00:41:47 +09:00
cd98afdd7b Do not NUL terminate base64 decoded output (#1752)
This is binary data and must use the explicit length.
2021-08-31 00:15:47 +09:00
dac6885fb0 Don't over-allocate in base64 encoding and decoding (#1751) 2021-08-30 00:03:10 +09:00
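The exact sizes, for reference (the formulas are standard base64 arithmetic; the function names are illustrative): encoding n bytes yields 4*ceil(n/3) characters, and decoding n characters yields at most 3*(n/4) bytes. The decoded output is binary, which is why the entry above returns an explicit length instead of NUL-terminating.
```cpp
#include <cstddef>

size_t b64_encoded_len(size_t n) { return 4 * ((n + 2) / 3); }  // +1 only if stored as a C string
size_t b64_decoded_max(size_t n) { return 3 * (n / 4); }        // exact length depends on '=' padding
```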
fcd180891b fix misuse of IsUploading (#1747)
Co-authored-by: liubingrun <liubr1@chinatelecom.cn>
2021-08-29 23:41:02 +09:00
d5d541c7f7 Adding FreeBSD example to README
S3FS has existed on FreeBSD since 2009, and it should be reflected here that it is well supported.
2021-08-26 10:18:56 +09:00
a868c0656e Changed etaglist_t from string list to new structure etagpairs list 2021-08-16 09:27:12 +09:00
60 changed files with 4677 additions and 2560 deletions


@ -2,6 +2,8 @@ Checks: '
-*,
bugprone-*,
-bugprone-branch-clone,
-bugprone-easily-swappable-parameters,
-bugprone-implicit-widening-of-multiplication-result,
-bugprone-macro-parentheses,
-bugprone-narrowing-conversions,
-bugprone-unhandled-self-assignment,
@ -20,6 +22,7 @@ Checks: '
-modernize-avoid-c-arrays,
-modernize-deprecated-headers,
-modernize-loop-convert,
-modernize-return-braced-init-list,
-modernize-use-auto,
-modernize-use-nullptr,
-modernize-use-trailing-return-type,


@ -32,12 +32,6 @@ on:
#
# Jobs
#
# [NOTE]
# Some tests using awscli may output a python warning.
# The warning is about HTTPS connections using self-signed certificates.
# That's why the PYTHONWARNINGS environment variable disables the
# "Unverified HTTPS request" warning.
#
jobs:
Linux:
runs-on: ubuntu-latest
@ -56,15 +50,15 @@ jobs:
#
matrix:
container:
- ubuntu:21.04
- ubuntu:21.10
- ubuntu:20.04
- ubuntu:18.04
- ubuntu:16.04
- debian:bullseye
- debian:buster
- debian:stretch
- centos:centos8
- rockylinux:8
- centos:centos7
- fedora:34
- fedora:35
- opensuse/leap:15
container:
@ -78,12 +72,6 @@ jobs:
#
DEBIAN_FRONTEND: noninteractive
# [NOTE]
# Since using a self-signed certificate and have not registered a certificate authority,
# we get a warning in python, so we suppress it(by PYTHONWARNINGS).
#
PYTHONWARNINGS: "ignore:Unverified HTTPS request"
steps:
# [NOTE]
# On openSUSE, tar and gzip must be installed before action/checkout.
@ -99,7 +87,7 @@ jobs:
# Matters that depend on OS:VERSION are determined and executed in the following script.
# Please note that the option to configure (CONFIGURE_OPTIONS) is set in the environment variable.
#
- name: Install pacakagse
- name: Install packages
run: |
.github/workflows/linux-ci-helper.sh ${{ matrix.container }}
@ -107,11 +95,18 @@ jobs:
run: |
./autogen.sh
/bin/sh -c "./configure ${CONFIGURE_OPTIONS}"
make
make --jobs=$(nproc)
- name: Cppcheck
run: |
make cppcheck
# work around resource leak false positives on older Linux distributions
if cppcheck --version | awk '{if ($2 <= 1.86) { exit(1) } }'; then
make cppcheck
fi
- name: Shellcheck
run: |
make shellcheck
- name: Test suite
run: |
@ -119,21 +114,14 @@ jobs:
# [NOTE]
# A case of "runs-on: macos-11.0" does not work,
# becase load_osxfuse returns exit code = 1.
# because load_osxfuse returns exit code = 1.
# Maybe it needs to reboot. Apple said
# "Installing a new kernel extension requires signing in as an Admin user. You must also restart your Mac to load the extension".
# Then we do not use macos 11 on Github Actions now.
# Then we do not use macos 11 on GitHub Actions now.
#
macos10:
runs-on: macos-10.15
env:
# [NOTE]
# Since using a self-signed certificate and have not registered a certificate authority,
# we get a warning in python, so we suppress it(by PYTHONWARNINGS).
#
PYTHONWARNINGS: "ignore:Unverified HTTPS request"
steps:
- name: Checkout source code
uses: actions/checkout@v2
@ -150,7 +138,7 @@ jobs:
- name: Install brew other packages
run: |
S3FS_BREW_PACKAGES='automake cppcheck python3 coreutils gnu-sed';
S3FS_BREW_PACKAGES='automake cppcheck python3 coreutils gnu-sed shellcheck';
for s3fs_brew_pkg in ${S3FS_BREW_PACKAGES}; do if brew list | grep -q ${s3fs_brew_pkg}; then if brew outdated | grep -q ${s3fs_brew_pkg}; then HOMEBREW_NO_AUTO_UPDATE=1 brew upgrade ${s3fs_brew_pkg}; fi; else HOMEBREW_NO_AUTO_UPDATE=1 brew install ${s3fs_brew_pkg}; fi; done;
- name: Install awscli
@ -165,12 +153,16 @@ jobs:
run: |
./autogen.sh
PKG_CONFIG_PATH=/usr/local/opt/curl/lib/pkgconfig:/usr/local/opt/openssl/lib/pkgconfig ./configure CXXFLAGS='-std=c++03 -DS3FS_PTHREAD_ERRORCHECK=1'
make
make --jobs=$(sysctl -n hw.ncpu)
- name: Cppcheck
run: |
make cppcheck
- name: Shellcheck
run: |
make shellcheck
- name: Test suite
run: |
make check -C src


@ -24,7 +24,7 @@ echo "${PRGNAME} [INFO] Start Linux helper for installing packages."
#-----------------------------------------------------------
# Common variables
#-----------------------------------------------------------
PRGNAME=`basename $0`
PRGNAME=$(basename "$0")
#-----------------------------------------------------------
# Parameter check
@ -40,8 +40,10 @@ fi
# Container OS variables
#-----------------------------------------------------------
CONTAINER_FULLNAME=$1
CONTAINER_OSNAME=`echo ${CONTAINER_FULLNAME} | sed 's/:/ /g' | awk '{print $1}'`
CONTAINER_OSVERSION=`echo ${CONTAINER_FULLNAME} | sed 's/:/ /g' | awk '{print $2}'`
# shellcheck disable=SC2034
CONTAINER_OSNAME=$(echo "${CONTAINER_FULLNAME}" | sed 's/:/ /g' | awk '{print $1}')
# shellcheck disable=SC2034
CONTAINER_OSVERSION=$(echo "${CONTAINER_FULLNAME}" | sed 's/:/ /g' | awk '{print $2}')
#-----------------------------------------------------------
# Common variables for pip
@ -53,102 +55,112 @@ INSTALL_AWSCLI_PACKAGES="awscli"
#-----------------------------------------------------------
# Parameters for configure(set environments)
#-----------------------------------------------------------
CONFIGURE_OPTIONS="CXXFLAGS='-std=c++11 -DS3FS_PTHREAD_ERRORCHECK=1' --prefix=/usr --with-openssl"
# shellcheck disable=SC2089
CONFIGURE_OPTIONS="CXXFLAGS='-O -std=c++03 -DS3FS_PTHREAD_ERRORCHECK=1' --prefix=/usr --with-openssl"
#-----------------------------------------------------------
# OS dependent variables
#-----------------------------------------------------------
if [ "${CONTAINER_FULLNAME}" = "ubuntu:21.04" ]; then
if [ "${CONTAINER_FULLNAME}" = "ubuntu:21.10" ]; then
PACKAGE_MANAGER_BIN="apt-get"
PACKAGE_UPDATE_OPTIONS="update -y -qq"
INSTALL_PACKAGES="autoconf autotools-dev fuse libfuse-dev libcurl4-openssl-dev libxml2-dev locales-all mime-support libtool pkg-config libssl-dev attr wget python2 python3-pip"
INSTALL_CPPCHECK_OPTIONS=""
INSTALL_JDK_PACKAGES="openjdk-8-jdk"
INSTALL_PACKAGES="autoconf autotools-dev default-jre-headless fuse libfuse-dev libcurl4-openssl-dev libxml2-dev locales-all mime-support libtool pkg-config libssl-dev attr curl python3-pip"
INSTALL_CHECKER_PKGS="cppcheck shellcheck"
INSTALL_CHECKER_PKG_OPTIONS=""
elif [ "${CONTAINER_FULLNAME}" = "ubuntu:20.04" ]; then
PACKAGE_MANAGER_BIN="apt-get"
PACKAGE_UPDATE_OPTIONS="update -y -qq"
INSTALL_PACKAGES="autoconf autotools-dev fuse libfuse-dev libcurl4-openssl-dev libxml2-dev locales-all mime-support libtool pkg-config libssl-dev attr wget python2 python3-pip"
INSTALL_CPPCHECK_OPTIONS=""
INSTALL_JDK_PACKAGES="openjdk-8-jdk"
INSTALL_PACKAGES="autoconf autotools-dev default-jre-headless fuse libfuse-dev libcurl4-openssl-dev libxml2-dev locales-all mime-support libtool pkg-config libssl-dev attr curl python3-pip"
INSTALL_CHECKER_PKGS="cppcheck shellcheck"
INSTALL_CHECKER_PKG_OPTIONS=""
elif [ "${CONTAINER_FULLNAME}" = "ubuntu:18.04" ]; then
PACKAGE_MANAGER_BIN="apt-get"
PACKAGE_UPDATE_OPTIONS="update -y -qq"
INSTALL_PACKAGES="autoconf autotools-dev fuse libfuse-dev libcurl4-openssl-dev libxml2-dev locales-all mime-support libtool pkg-config libssl-dev attr wget python3-pip"
INSTALL_CPPCHECK_OPTIONS=""
INSTALL_JDK_PACKAGES="openjdk-8-jdk"
INSTALL_PACKAGES="autoconf autotools-dev default-jre-headless fuse libfuse-dev libcurl4-openssl-dev libxml2-dev locales-all mime-support libtool pkg-config libssl-dev attr curl python3-pip"
INSTALL_CHECKER_PKGS="cppcheck shellcheck"
INSTALL_CHECKER_PKG_OPTIONS=""
elif [ "${CONTAINER_FULLNAME}" = "ubuntu:16.04" ]; then
PACKAGE_MANAGER_BIN="apt-get"
PACKAGE_UPDATE_OPTIONS="update -y -qq"
INSTALL_PACKAGES="autoconf autotools-dev fuse libfuse-dev libcurl4-openssl-dev libxml2-dev locales-all mime-support libtool pkg-config libssl-dev attr wget python3-pip"
INSTALL_CPPCHECK_OPTIONS=""
INSTALL_JDK_PACKAGES="openjdk-8-jdk"
INSTALL_PACKAGES="autoconf autotools-dev default-jre-headless fuse libfuse-dev libcurl4-openssl-dev libxml2-dev locales-all mime-support libtool pkg-config libssl-dev attr curl python3-pip"
INSTALL_CHECKER_PKGS="cppcheck shellcheck"
INSTALL_CHECKER_PKG_OPTIONS=""
elif [ "${CONTAINER_FULLNAME}" = "debian:bullseye" ]; then
PACKAGE_MANAGER_BIN="apt-get"
PACKAGE_UPDATE_OPTIONS="update -y -qq"
INSTALL_PACKAGES="autoconf autotools-dev default-jre-headless fuse libfuse-dev libcurl4-openssl-dev libxml2-dev locales-all mime-support libtool pkg-config libssl-dev attr curl procps python3-pip"
INSTALL_CHECKER_PKGS="cppcheck shellcheck"
INSTALL_CHECKER_PKG_OPTIONS=""
elif [ "${CONTAINER_FULLNAME}" = "debian:buster" ]; then
PACKAGE_MANAGER_BIN="apt-get"
PACKAGE_UPDATE_OPTIONS="update -y -qq"
INSTALL_PACKAGES="autoconf autotools-dev fuse libfuse-dev libcurl4-openssl-dev libxml2-dev locales-all mime-support libtool pkg-config libssl-dev attr wget python2 procps python3-pip"
INSTALL_CPPCHECK_OPTIONS=""
INSTALL_JDK_PACKAGES="adoptopenjdk-8-hotspot"
INSTALL_PACKAGES="autoconf autotools-dev default-jre-headless fuse libfuse-dev libcurl4-openssl-dev libxml2-dev locales-all mime-support libtool pkg-config libssl-dev attr curl procps python3-pip"
INSTALL_CHECKER_PKGS="cppcheck shellcheck"
INSTALL_CHECKER_PKG_OPTIONS=""
elif [ "${CONTAINER_FULLNAME}" = "debian:stretch" ]; then
PACKAGE_MANAGER_BIN="apt-get"
PACKAGE_UPDATE_OPTIONS="update -y -qq"
INSTALL_PACKAGES="autoconf autotools-dev fuse libfuse-dev libcurl4-openssl-dev libxml2-dev locales-all mime-support libtool pkg-config libssl-dev attr wget procps python3-pip"
INSTALL_CPPCHECK_OPTIONS=""
INSTALL_JDK_PACKAGES="openjdk-8-jdk"
INSTALL_PACKAGES="autoconf autotools-dev default-jre-headless fuse libfuse-dev libcurl4-openssl-dev libxml2-dev locales-all mime-support libtool pkg-config libssl-dev attr curl procps python3-pip"
INSTALL_CHECKER_PKGS="cppcheck shellcheck"
INSTALL_CHECKER_PKG_OPTIONS=""
elif [ "${CONTAINER_FULLNAME}" = "centos:centos8" ]; then
elif [ "${CONTAINER_FULLNAME}" = "rockylinux:8" ]; then
PACKAGE_MANAGER_BIN="dnf"
PACKAGE_UPDATE_OPTIONS="update -y -qq"
INSTALL_PACKAGES="gcc libstdc++-devel gcc-c++ glibc-langpack-en fuse fuse-devel curl-devel libxml2-devel mailcap git automake make openssl-devel attr diffutils wget python2 python3"
INSTALL_CPPCHECK_OPTIONS="--enablerepo=powertools"
INSTALL_JDK_PACKAGES="java-1.8.0-openjdk"
# [NOTE]
# Add -O2 to prevent the warning '_FORTIFY_SOURCE requires compiling with optimization(-O)'.
# Installing ShellCheck on Rocky Linux is not easy.
# Give up to run ShellCheck on Rocky Linux as we don't have to run ShellChek on all operating systems.
#
CONFIGURE_OPTIONS="CXXFLAGS='-O2 -std=c++11 -DS3FS_PTHREAD_ERRORCHECK=1' --prefix=/usr --with-openssl"
INSTALL_PACKAGES="curl-devel fuse fuse-devel gcc libstdc++-devel gcc-c++ glibc-langpack-en java-11-openjdk-headless libxml2-devel mailcap git automake make openssl-devel attr diffutils curl python3"
INSTALL_CHECKER_PKGS="cppcheck"
INSTALL_CHECKER_PKG_OPTIONS="--enablerepo=powertools"
elif [ "${CONTAINER_FULLNAME}" = "centos:centos7" ]; then
PACKAGE_MANAGER_BIN="yum"
PACKAGE_UPDATE_OPTIONS="update -y"
INSTALL_PACKAGES="gcc libstdc++-devel gcc-c++ glibc-langpack-en fuse fuse-devel curl-devel libxml2-devel mailcap git automake make openssl-devel attr wget python3 epel-release"
INSTALL_CPPCHECK_OPTIONS="--enablerepo=epel"
INSTALL_JDK_PACKAGES="java-1.8.0-openjdk"
# [NOTE]
# Add -O2 to prevent the warning '_FORTIFY_SOURCE requires compiling with optimization(-O)'.
# ShellCheck version(0.3.8) is too low to check.
# And in this version, it cannot be passed due to following error.
# "shellcheck: ./test/integration-test-main.sh: hGetContents: invalid argument (invalid byte sequence)"
#
CONFIGURE_OPTIONS="CXXFLAGS='-O2 -std=c++11 -DS3FS_PTHREAD_ERRORCHECK=1' --prefix=/usr --with-openssl"
INSTALL_PACKAGES="curl-devel fuse fuse-devel gcc libstdc++-devel gcc-c++ glibc-langpack-en java-11-openjdk-headless libxml2-devel mailcap git automake make openssl-devel attr curl python3 epel-release"
INSTALL_CHECKER_PKGS="cppcheck"
INSTALL_CHECKER_PKG_OPTIONS="--enablerepo=epel"
elif [ "${CONTAINER_FULLNAME}" = "fedora:34" ]; then
elif [ "${CONTAINER_FULLNAME}" = "fedora:35" ]; then
PACKAGE_MANAGER_BIN="dnf"
PACKAGE_UPDATE_OPTIONS="update -y -qq"
INSTALL_PACKAGES="gcc libstdc++-devel gcc-c++ glibc-langpack-en fuse fuse-devel curl-devel libxml2-devel mailcap git automake make openssl-devel wget attr diffutils python2 procps python3-pip"
INSTALL_CPPCHECK_OPTIONS=""
INSTALL_JDK_PACKAGES="java-1.8.0-openjdk"
# TODO: Cannot use java-latest-openjdk (17) due to modules issue in S3Proxy/jclouds/Guice
INSTALL_PACKAGES="curl-devel fuse fuse-devel gcc libstdc++-devel gcc-c++ glibc-langpack-en java-11-openjdk-headless libxml2-devel mailcap git automake make openssl-devel curl attr diffutils procps python3-pip"
INSTALL_CHECKER_PKGS="cppcheck ShellCheck"
INSTALL_CHECKER_PKG_OPTIONS=""
elif [ "${CONTAINER_FULLNAME}" = "opensuse/leap:15" ]; then
PACKAGE_MANAGER_BIN="zypper"
PACKAGE_UPDATE_OPTIONS="refresh"
INSTALL_PACKAGES="automake curl-devel fuse fuse-devel gcc-c++ libxml2-devel make openssl-devel python3-pip wget attr"
INSTALL_CPPCHECK_OPTIONS=""
INSTALL_JDK_PACKAGES="java-1_8_0-openjdk"
INSTALL_PACKAGES="automake curl-devel fuse fuse-devel gcc-c++ java-11-openjdk-headless libxml2-devel make openssl-devel python3-pip curl attr ShellCheck"
INSTALL_CHECKER_PKGS="cppcheck ShellCheck"
INSTALL_CHECKER_PKG_OPTIONS=""
else
echo "No container configured for: ${CONTAINER_FULLNAME}"
exit 1
fi
@ -159,53 +171,37 @@ fi
# Update packages (ex. apt-get update -y -qq)
#
echo "${PRGNAME} [INFO] Updates."
${PACKAGE_MANAGER_BIN} ${PACKAGE_UPDATE_OPTIONS}
/bin/sh -c "${PACKAGE_MANAGER_BIN} ${PACKAGE_UPDATE_OPTIONS}"
#
# Install pacakages ( with cppcheck )
# Install packages ( with cppcheck )
#
echo "${PRGNAME} [INFO] Install packages."
${PACKAGE_MANAGER_BIN} install -y ${INSTALL_PACKAGES}
/bin/sh -c "${PACKAGE_MANAGER_BIN} install -y ${INSTALL_PACKAGES}"
echo "${PRGNAME} [INFO] Install cppcheck package."
${PACKAGE_MANAGER_BIN} ${INSTALL_CPPCHECK_OPTIONS} install -y cppcheck
/bin/sh -c "${PACKAGE_MANAGER_BIN} ${INSTALL_CHECKER_PKG_OPTIONS} install -y ${INSTALL_CHECKER_PKGS}"
#
# Install JDK 1.8
#
# [NOTE]
# Now, the previous Java LTS version 8 is not available in the official Debian Buster repositories.
# It'll enable the AdoptOpenJDK repository, which provides prebuilt OpenJDK packages.
#
echo "${PRGNAME} [INFO] Install JDK 1.8 package."
if [ "${CONTAINER_FULLNAME}" != "debian:buster" ]; then
${PACKAGE_MANAGER_BIN} install -y ${INSTALL_JDK_PACKAGES}
else
# [NOTE]
# Debian Buster is special case for installing JDK.
#
${PACKAGE_MANAGER_BIN} install -y apt-transport-https ca-certificates dirmngr gnupg software-properties-common
wget -qO - https://adoptopenjdk.jfrog.io/adoptopenjdk/api/gpg/key/public | apt-key add -
add-apt-repository --yes https://adoptopenjdk.jfrog.io/adoptopenjdk/deb/
${PACKAGE_MANAGER_BIN} ${PACKAGE_UPDATE_OPTIONS}
${PACKAGE_MANAGER_BIN} install -y ${INSTALL_JDK_PACKAGES}
fi
# Check Java version
java -version
#
# Install awscli
#
echo "${PRGNAME} [INFO] Install awscli package."
${PIP_BIN} install ${PIP_OPTIONS} ${INSTALL_AWSCLI_PACKAGES}
${PIP_BIN} install ${PIP_OPTIONS} rsa
/bin/sh -c "${PIP_BIN} install ${PIP_OPTIONS} ${INSTALL_AWSCLI_PACKAGES}"
/bin/sh -c "${PIP_BIN} install ${PIP_OPTIONS} rsa"
#-----------------------------------------------------------
# Set environment for configure
#-----------------------------------------------------------
echo "${PRGNAME} [INFO] Set environment for configure options"
# shellcheck disable=SC2090
export CONFIGURE_OPTIONS
echo "${PRGNAME} [INFO] Finish Linux helper for installing packages."
exit 0
#

.gitignore

@ -56,6 +56,11 @@ test-driver
compile
missing
#
# man page
#
doc/man/s3fs.1
#
# object directories
#
@ -77,7 +82,18 @@ src/test_curl_util
src/test_page_list
src/test_string_util
test/chaos-http-proxy-*
test/junk_data
test/s3proxy-*
test/write_multiblock
#
# Windows ports
#
*.dll
*.exe
fuse.pc
WinFsp/
bin/
#
# Local variables:


@ -32,3 +32,58 @@ cd s3fs-fuse
make
sudo make install
```
## Compilation on Windows (using MSYS2)
On Windows, use [MSYS2](https://www.msys2.org/) to compile for itself.
1. Install [WinFsp](https://github.com/billziss-gh/winfsp) to your machine.
2. Install dependencies onto MSYS2:
```sh
pacman -S git autoconf automake gcc make pkg-config libopenssl-devel libcurl-devel libxml2-devel libzstd-devel
```
3. Clone this repository, then change directory into the cloned one.
4. Copy WinFsp files to the directory:
```sh
cp -r "/c/Program Files (x86)/WinFsp" "./WinFsp"
```
5. Write `fuse.pc` to resolve the package correctly:
```sh
cat > ./fuse.pc << 'EOS'
arch=x64
prefix=${pcfiledir}/WinFsp
incdir=${prefix}/inc/fuse
implib=${prefix}/bin/winfsp-${arch}.dll
Name: fuse
Description: WinFsp FUSE compatible API
Version: 2.8.4
URL: http://www.secfs.net/winfsp/
Libs: "${implib}"
Cflags: -I"${incdir}"
EOS
```
6. Compile using the command line:
```sh
./autogen.sh
PKG_CONFIG_PATH="$PKG_CONFIG_PATH:$(pwd)" ./configure
make
```
7. Copy binary files to distribute at one place:
```sh
mkdir ./bin
cp ./src/s3fs.exe ./bin/
cp ./WinFsp/bin/winfsp-x64.dll ./bin/
cp /usr/bin/msys-*.dll ./bin/
```
8. Distribute these files.


@ -1,6 +1,18 @@
ChangeLog for S3FS
------------------
Version 1.91 -- 07 Mar, 2022 (major changes only)
#1753 - Fix RowFlush can not upload last part smaller than 5MB using NoCacheMultipartPost
#1760 - Fix IAM role retrieval from IMDSv2
#1801 - Add option to allow unsigned payloads
#1809 - Fix mixupload return EntityTooSmall while a copypart is less than 5MB after split
#1855 - Allow compilation on Windows via MSYS2
#1868 - Handle utimensat UTIME_NOW and UTIME_OMIT special values
#1871 - #1880 - Preserve sub-second precision in more situations
#1879 - Always flush open files with O_CREAT flag
#1887 - Fixed not to call Flush even if the file size is increased
#1888 - Include climits to support musl libc
Version 1.90 -- 07 Aug, 2021 (major changes only)
#1599 - Don't ignore nomultipart when storage is low
#1600 - #1602 - #1604 - #1617 - #1619 - #1620 - #1623 - #1624 - Fix POSIX compatibility issues found by pjdfstest


@ -17,6 +17,7 @@
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
######################################################################
SUBDIRS=src test doc
EXTRA_DIST=doc default_commit_hash
@ -28,6 +29,8 @@ dist-hook:
release : dist ../utils/release.sh
../utils/release.sh $(DIST_ARCHIVES)
.PHONY: cppcheck shellcheck
cppcheck:
cppcheck --quiet --error-exitcode=1 \
--inline-suppr \
@ -43,6 +46,35 @@ cppcheck:
--suppress=unmatchedSuppression \
src/ test/
#
# ShellCheck
#
SHELLCHECK_CMD = shellcheck
SHELLCHECK_SH_OPT = --shell=sh
SHELLCHECK_BASH_OPT = --shell=bash
# [NOTE]
# To control error warnings as a whole, specify the "SC<number>" with the following variables.
#
SHELLCHECK_COMMON_IGN = --exclude=SC1091
SHELLCHECK_CUSTOM_IGN = --exclude=SC1091
shellcheck:
@if type shellcheck > /dev/null 2>&1; then \
echo "* ShellCheck version"; \
$(SHELLCHECK_CMD) --version; \
echo ""; \
echo "* Check all sh files with ShellCheck"; \
LC_ALL=C.UTF-8 $(SHELLCHECK_CMD) $(SHELLCHECK_SH_OPT) $(SHELLCHECK_COMMON_IGN) $$(grep '#![[:space:]]*/bin/sh' $$(find . -type f -name \*.sh) | sed -e 's|^\(.*\):#\!.*$$|\1|g') || exit 1; \
echo "-> No error was detected."; \
echo ""; \
echo "* Check all bash files with ShellCheck"; \
LC_ALL=C.UTF-8 $(SHELLCHECK_CMD) $(SHELLCHECK_BASH_OPT) $(SHELLCHECK_COMMON_IGN) $$(grep '#![[:space:]]*/bin/bash' $$(find . -type f -name \*.sh) | sed -e 's|^\(.*\):#\!.*$$|\1|g') || exit 1; \
echo "-> No error was detected."; \
else \
echo "* ShellCheck is not installed, so skip this."; \
fi
#
# Local variables:
# tab-width: 4


@ -1,6 +1,6 @@
# s3fs
s3fs allows Linux and macOS to mount an S3 bucket via FUSE.
s3fs allows Linux, macOS, and FreeBSD to mount an S3 bucket via FUSE.
s3fs preserves the native object format for files, allowing use of other
tools like [AWS CLI](https://github.com/aws/aws-cli).
[![s3fs-fuse CI](https://github.com/s3fs-fuse/s3fs-fuse/workflows/s3fs-fuse%20CI/badge.svg)](https://github.com/s3fs-fuse/s3fs-fuse/actions)
@ -68,11 +68,17 @@ Many systems provide pre-built packages:
sudo zypper install s3fs
```
* macOS via [Homebrew](https://brew.sh/):
* macOS 10.12 and newer via [Homebrew](https://brew.sh/):
```
brew install --cask osxfuse
brew install s3fs
brew install gromgit/fuse/s3fs-mac
```
* FreeBSD:
```
pkg install fusefs-s3fs
```
Note: Homebrew has deprecated osxfuse and s3fs may not install any more, see


@ -1,5 +1,5 @@
#! /bin/sh
#!/bin/sh
#
# This file is part of S3FS.
#
# Copyright 2009, 2010 Free Software Foundation, Inc.
@ -22,14 +22,12 @@
echo "--- Make commit hash file -------"
SHORTHASH="unknown"
type git > /dev/null 2>&1
if [ $? -eq 0 -a -d .git ]; then
RESULT=`git rev-parse --short HEAD`
if [ $? -eq 0 ]; then
SHORTHASH=${RESULT}
if command -v git > /dev/null 2>&1 && test -d .git; then
if RESULT=$(git rev-parse --short HEAD); then
SHORTHASH="${RESULT}"
fi
fi
echo ${SHORTHASH} > default_commit_hash
echo "${SHORTHASH}" > default_commit_hash
echo "--- Finished commit hash file ---"


@ -20,7 +20,7 @@
dnl Process this file with autoconf to produce a configure script.
AC_PREREQ(2.59)
AC_INIT(s3fs, 1.90)
AC_INIT(s3fs, 1.91)
AC_CONFIG_HEADER([config.h])
AC_CANONICAL_SYSTEM
@ -310,15 +310,24 @@ AC_COMPILE_IFELSE(
]
)
dnl ----------------------------------------------
dnl build date
dnl ----------------------------------------------
AC_SUBST([MAN_PAGE_DATE], [$(date +"%B %Y")])
dnl ----------------------------------------------
dnl output files
dnl ----------------------------------------------
AC_CONFIG_FILES(Makefile src/Makefile test/Makefile doc/Makefile)
AC_CONFIG_FILES(Makefile
src/Makefile
test/Makefile
doc/Makefile
doc/man/s3fs.1)
dnl ----------------------------------------------
dnl short commit hash
dnl ----------------------------------------------
AC_CHECK_PROG([GITCMD], [git version], [yes], [no])
AC_CHECK_PROG([GITCMD], [git --version], [yes], [no])
AS_IF([test -d .git], [DOTGITDIR=yes], [DOTGITDIR=no])
AC_MSG_CHECKING([github short commit hash])
@ -347,6 +356,6 @@ dnl ----------------------------------------------
# tab-width: 4
# c-basic-offset: 4
# End:
# vim600: expandtab sw=4 ts= fdm=marker
# vim600: expandtab sw=4 ts=4 fdm=marker
# vim<600: expandtab sw=4 ts=4
#


@ -1,4 +1,4 @@
.TH S3FS "1" "February 2011" "S3FS" "User Commands"
.TH S3FS "1" "@MAN_PAGE_DATE@" "S3FS" "User Commands"
.SH NAME
S3FS \- FUSE-based file system backed by Amazon S3
.SH SYNOPSIS
@ -65,7 +65,7 @@ if it is not specified bucket name (and path) in command line, must specify this
.TP
\fB\-o\fR default_acl (default="private")
the default canned acl to apply to all written s3 objects, e.g., "private", "public-read".
see https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl for the full list of canned acls.
see https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl for the full list of canned ACLs.
.TP
\fB\-o\fR retries (default="5")
number of times to retry a failed S3 transaction.
@ -207,17 +207,17 @@ but lower values may improve performance.
.TP
\fB\-o\fR max_dirty_data (default="5120")
Flush dirty data to S3 after a certain number of MB written.
The minimum value is 50 MB. -1 value means disable.
The minimum value is 50 MB. -1 value means disable.
Cannot be used with nomixupload.
.TP
\fB\-o\fR ensure_diskfree (default 0)
sets MB to ensure disk free space. This option means the threshold of free space size on disk which is used for the cache file by s3fs.
s3fs makes file for downloading, uploading and caching files.
If the disk free space is smaller than this value, s3fs do not use diskspace as possible in exchange for the performance.
If the disk free space is smaller than this value, s3fs do not use disk space as possible in exchange for the performance.
.TP
\fB\-o\fR multipart_threshold (default="25")
threshold, in MB, to use multipart upload instead of
single-part. Must be at least 5 MB.
single-part. Must be at least 5 MB.
.TP
\fB\-o\fR singlepart_copy_limit (default="512")
maximum size, in MB, of a single-part copy before trying
@ -253,7 +253,7 @@ In the opposite case s3fs allows access to all users as the default.
But if you set the allow_other with this option, you can control the permissions of the mount point by this option like umask.
.TP
\fB\-o\fR umask (default is "0000")
sets umask for files under the mountpoint. This can allow
sets umask for files under the mountpoint. This can allow
users other than the mounting user to read and write to files
that they did not create.
.TP
@ -262,6 +262,9 @@ that they did not create.
\fB\-o\fR enable_content_md5 (default is disable)
Allow S3 server to check data integrity of uploads via the Content-MD5 header.
This can add CPU overhead to transfers.
\fB\-o\fR enable_unsigned_payload (default is disable)
Do not calculate Content-SHA256 for PutObject and UploadPart
payloads. This can reduce CPU overhead to transfers.
.TP
\fB\-o\fR ecs (default is disable)
This option instructs s3fs to query the ECS container credential metadata address instead of the instance metadata address.
@ -271,7 +274,7 @@ This option requires the IAM role name or "auto". If you specify "auto", s3fs wi
.TP
\fB\-o\fR imdsv1only (default is to use IMDSv2 with fallback to v1)
AWS instance metadata service, used with IAM role authentication,
supports the use of an API token. If you're using an IAM role in an
supports the use of an API token. If you're using an IAM role in an
environment that does not support IMDSv2, setting this flag will skip
retrieval and usage of the API token when retrieving IAM credentials.
.TP
@ -329,16 +332,16 @@ This name will be added to logging messages and user agent headers sent by s3fs.
s3fs complements lack of information about file/directory mode if a file or a directory object does not have x-amz-meta-mode header.
As default, s3fs does not complements stat information for a object, then the object will not be able to be allowed to list/modify.
.TP
\fB\-o\fR notsup_compat_dir (not support compatibility directory types)
As a default, s3fs supports objects of the directory type as much as possible and recognizes them as directories.
Objects that can be recognized as directory objects are "dir/", "dir", "dir_$folder$", and there is a file object that does not have a directory object but contains that directory path.
s3fs needs redundant communication to support all these directory types.
The object as the directory created by s3fs is "dir/".
By restricting s3fs to recognize only "dir/" as a directory, communication traffic can be reduced.
This option is used to give this restriction to s3fs.
However, if there is a directory object other than "dir/" in the bucket, specifying this option is not recommended.
s3fs may not be able to recognize the object correctly if an object created by s3fs exists in the bucket.
Please use this option when the directory in the bucket is only "dir/" object.
\fB\-o\fR notsup_compat_dir (disable support of alternative directory names)
.RS
s3fs supports the three different naming schemas "dir/", "dir" and "dir_$folder$" to map directory names to S3 objects and vice versa. As a fourth variant, directories can be determined indirectly if there is a file object with a path (e.g. "/dir/file") but without the parent directory.
.TP
S3fs uses only the first schema "dir/" to create S3 objects for directories.
.TP
The support for these different naming schemas causes an increased communication effort.
.TP
If all applications exclusively use the "dir/" naming scheme and the bucket does not contain any objects with a different naming scheme, this option can be used to disable support for alternative naming schemes. This reduces access time and can save costs.
.RE
.TP
\fB\-o\fR use_wtf8 - support arbitrary file system encoding.
S3 requires all object names to be valid UTF-8. But some
@ -401,7 +404,7 @@ It can be specified as year, month, day, hour, minute, second, and it is express
For example, "1Y6M10D12h30m30s".
.SH FUSE/MOUNT OPTIONS
.TP
Most of the generic mount options described in 'man mount' are supported (ro, rw, suid, nosuid, dev, nodev, exec, noexec, atime, noatime, sync async, dirsync). Filesystems are mounted with '\-onodev,nosuid' by default, which can only be overridden by a privileged user.
Most of the generic mount options described in 'man mount' are supported (ro, rw, suid, nosuid, dev, nodev, exec, noexec, atime, noatime, sync async, dirsync). Filesystems are mounted with '\-onodev,nosuid' by default, which can only be overridden by a privileged user.
.TP
There are many FUSE specific mount options that can be specified. e.g. allow_other. See the FUSE README for the full set.
.SH LOCAL STORAGE CONSUMPTION
@ -423,6 +426,40 @@ The amount of local cache storage used can be indirectly controlled with "\-o e
Since s3fs always requires some storage space for operation, it creates temporary files to store incoming write requests until the required s3 request size is reached and the segment has been uploaded. After that, this data is truncated in the temporary file to free up storage space.
.TP
Per file you need at least twice the part size (default 5MB or "-o multipart_size") for writing multipart requests or space for the whole file if single requests are enabled ("\-o nomultipart").
.SH PERFORMANCE CONSIDERATIONS
.TP
This section discusses settings to improve s3fs performance.
.TP
In most cases, backend performance cannot be controlled and is therefore not part of this discussion.
.TP
Details of the local storage usage is discussed in "LOCAL STORAGE CONSUMPTION".
.TP
.SS CPU and Memory Consumption
.TP
s3fs is a multi-threaded application. Depending on the workload it may use multiple CPUs and a certain amount of memory. You can monitor the CPU and memory consumption with the "top" utility.
.TP
.SS Performance of S3 requests
.TP
s3fs provides several options (e.g. "\-o multipart_size", "\-o parallel_count") to control behaviour and thus indirectly the performance. The possible combinations of these options in conjunction with the various S3 backends are so varied that there is no individual recommendation other than the default values. Improved individual settings can be found by testing and measuring.
.TP
The two options "Enable no object cache" ("\-o enable_noobj_cache") and "Disable support of alternative directory names" ("\-o notsup_compat_dir") can be used to control shared access to the same bucket by different applications:
.TP
.IP \[bu]
Enable no object cache ("\-o enable_noobj_cache")
.RS
.TP
If a bucket is used exclusively by an s3fs instance, you can enable the cache for non-existent files and directories with "\-o enable_noobj_cache". This eliminates repeated requests to check the existence of an object, saving time and possibly money.
.RE
.IP \[bu]
Disable support of alternative directory names ("\-o notsup_compat_dir")
.RS
.TP
s3fs supports "dir/", "dir" and "dir_$folder$" to map directory names to S3 objects and vice versa.
.TP
Some applications use a different naming schema for associating directory names to S3 objects. For example, Apache Hadoop uses the "dir_$folder$" schema to create S3 objects for directories.
.TP
The option "\-o notsup_compat_dir" can be set if all accessing tools use the "dir/" naming schema for directory objects and the bucket does not contain any objects with a different naming scheme. In this case, accessing directory objects saves time and possibly money because alternative schemas are not checked.
.RE
.SH NOTES
.TP
The maximum size of objects that s3fs can handle depends on Amazon S3. For example, up to 5 GB when using single PUT API. And up to 5 TB is supported when Multipart Upload API is used.


@ -41,6 +41,7 @@ s3fs_SOURCES = \
s3objlist.cpp \
cache.cpp \
string_util.cpp \
s3fs_cred.cpp \
s3fs_util.cpp \
fdcache.cpp \
fdcache_entity.cpp \


@ -64,7 +64,7 @@ AutoLock::~AutoLock()
if (is_lock_acquired) {
int result = pthread_mutex_unlock(auto_mutex);
if(result != 0){
S3FS_PRN_CRIT("pthread_mutex_lock returned: %d", result);
S3FS_PRN_CRIT("pthread_mutex_unlock returned: %d", result);
abort();
}
}


@ -18,16 +18,15 @@
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <cerrno>
#include <cstdio>
#include <cstdlib>
#ifndef HAVE_CLOCK_GETTIME
#include <sys/time.h>
#endif
#include <algorithm>
#include "common.h"
#include "s3fs.h"
#include "s3fs_util.h"
#include "cache.h"
#include "autolock.h"
#include "string_util.h"
@ -35,39 +34,11 @@
//-------------------------------------------------------------------
// Utility
//-------------------------------------------------------------------
#ifndef CLOCK_REALTIME
#define CLOCK_REALTIME 0
#endif
#ifndef CLOCK_MONOTONIC
#define CLOCK_MONOTONIC CLOCK_REALTIME
#endif
#ifndef CLOCK_MONOTONIC_COARSE
#define CLOCK_MONOTONIC_COARSE CLOCK_MONOTONIC
#endif
#ifdef HAVE_CLOCK_GETTIME
static int s3fs_clock_gettime(int clk_id, struct timespec* ts)
{
return clock_gettime(static_cast<clockid_t>(clk_id), ts);
}
#else
static int s3fs_clock_gettime(int clk_id, struct timespec* ts)
{
struct timeval now;
if(0 != gettimeofday(&now, NULL)){
return -1;
}
ts->tv_sec = now.tv_sec;
ts->tv_nsec = now.tv_usec * 1000;
return 0;
}
#endif
inline void SetStatCacheTime(struct timespec& ts)
{
if(-1 == s3fs_clock_gettime(CLOCK_MONOTONIC_COARSE, &ts)){
ts.tv_sec = time(NULL);
ts.tv_nsec = 0;
if(-1 == clock_gettime(static_cast<clockid_t>(CLOCK_MONOTONIC_COARSE), &ts)){
S3FS_PRN_CRIT("clock_gettime failed: %d", errno);
abort();
}
}
@ -815,8 +786,13 @@ bool convert_header_to_stat(const char* path, const headers_t& meta, struct stat
if(pst->st_mtime < 0){
pst->st_mtime = 0L;
}else{
if(mtime.tv_sec < 0){
mtime.tv_sec = 0;
mtime.tv_nsec = 0;
}
#if defined(__APPLE__)
pst->st_mtime = mtime.tv_sec;
pst->st_mtimespec.tv_nsec = mtime.tv_nsec;
#else
pst->st_mtim.tv_sec = mtime.tv_sec;
pst->st_mtim.tv_nsec = mtime.tv_nsec;
@ -828,8 +804,13 @@ bool convert_header_to_stat(const char* path, const headers_t& meta, struct stat
if(pst->st_ctime < 0){
pst->st_ctime = 0L;
}else{
if(ctime.tv_sec < 0){
ctime.tv_sec = 0;
ctime.tv_nsec = 0;
}
#if defined(__APPLE__)
pst->st_ctime = ctime.tv_sec;
pst->st_ctimespec.tv_nsec = ctime.tv_nsec;
#else
pst->st_ctim.tv_sec = ctime.tv_sec;
pst->st_ctim.tv_nsec = ctime.tv_nsec;
@ -841,8 +822,13 @@ bool convert_header_to_stat(const char* path, const headers_t& meta, struct stat
if(pst->st_atime < 0){
pst->st_atime = 0L;
}else{
if(atime.tv_sec < 0){
atime.tv_sec = 0;
atime.tv_nsec = 0;
}
#if defined(__APPLE__)
pst->st_atime = atime.tv_sec;
pst->st_atimespec.tv_nsec = atime.tv_nsec;
#else
pst->st_atim.tv_sec = atime.tv_sec;
pst->st_atim.tv_nsec = atime.tv_nsec;


@ -42,12 +42,10 @@ extern bool noxmlns;
extern std::string program_name;
extern std::string service_path;
extern std::string s3host;
extern std::string bucket;
extern std::string mount_prefix;
extern std::string endpoint;
extern std::string cipher_suites;
extern std::string instance_name;
extern std::string aws_profile;
#endif // S3FS_COMMON_H_

File diff suppressed because it is too large


@ -34,6 +34,7 @@
#include "psemaphore.h"
#include "metaheader.h"
#include "fdcache_page.h"
#include "s3fs_cred.h"
//----------------------------------------------
// Avoid dependency on libcurl version
@ -83,7 +84,6 @@ class S3fsCurl;
// Prototype function for lazy setup options for curl handle
typedef bool (*s3fscurl_lazy_setup)(S3fsCurl* s3fscurl);
typedef std::map<std::string, std::string> iamcredmap_t;
typedef std::map<std::string, std::string> sseckeymap_t;
typedef std::list<sseckeymap_t> sseckeylist_t;
@ -140,24 +140,7 @@ class S3fsCurl
static bool is_content_md5;
static bool is_verbose;
static bool is_dump_body;
static std::string AWSAccessKeyId;
static std::string AWSSecretAccessKey;
static std::string AWSAccessToken;
static time_t AWSAccessTokenExpire;
static bool is_ecs;
static bool is_use_session_token;
static bool is_ibm_iam_auth;
static std::string IAM_cred_url;
static int IAM_api_version;
static std::string IAMv2_token_url;
static int IAMv2_token_ttl;
static std::string IAMv2_token_ttl_hdr;
static std::string IAMv2_token_hdr;
static std::string IAMv2_api_token;
static size_t IAM_field_count;
static std::string IAM_token_field;
static std::string IAM_expiry_field;
static std::string IAM_role;
static S3fsCred* ps3fscred;
static long ssl_verify_hostname;
static curltime_t curl_times;
static curlprogress_t curl_progress;
@ -169,6 +152,7 @@ class S3fsCurl
static off_t multipart_size;
static off_t multipart_copy_size;
static signature_type_t signature_type;
static bool is_unsigned_payload;
static bool is_ua; // User-Agent
static bool listobjectsv2;
static bool requester_pays;
@ -251,34 +235,26 @@ class S3fsCurl
static bool PreGetObjectRequestSetCurlOpts(S3fsCurl* s3fscurl);
static bool PreHeadRequestSetCurlOpts(S3fsCurl* s3fscurl);
static bool ParseIAMCredentialResponse(const char* response, iamcredmap_t& keyval);
static bool SetIAMCredentials(const char* response);
static bool SetIAMv2APIToken(const char* response);
static bool ParseIAMRoleFromMetaDataResponse(const char* response, std::string& rolename);
static bool SetIAMRoleFromMetaData(const char* response);
static bool LoadEnvSseCKeys();
static bool LoadEnvSseKmsid();
static bool PushbackSseKeys(const std::string& onekey);
static bool AddUserAgent(CURL* hCurl);
static int CurlDebugFunc(CURL* hcurl, curl_infotype type, char* data, size_t size, void* userptr);
static int CurlDebugBodyInFunc(CURL* hcurl, curl_infotype type, char* data, size_t size, void* userptr);
static int CurlDebugBodyOutFunc(CURL* hcurl, curl_infotype type, char* data, size_t size, void* userptr);
static int RawCurlDebugFunc(CURL* hcurl, curl_infotype type, char* data, size_t size, void* userptr, curl_infotype datatype);
static int CurlDebugFunc(const CURL* hcurl, curl_infotype type, char* data, size_t size, void* userptr);
static int CurlDebugBodyInFunc(const CURL* hcurl, curl_infotype type, char* data, size_t size, void* userptr);
static int CurlDebugBodyOutFunc(const CURL* hcurl, curl_infotype type, char* data, size_t size, void* userptr);
static int RawCurlDebugFunc(const CURL* hcurl, curl_infotype type, char* data, size_t size, void* userptr, curl_infotype datatype);
// methods
bool ResetHandle(bool lock_already_held = false);
bool RemakeHandle();
bool ClearInternalData();
void insertV4Headers();
void insertV2Headers();
void insertIBMIAMHeaders();
void insertV4Headers(const std::string& access_key_id, const std::string& secret_access_key, const std::string& access_token);
void insertV2Headers(const std::string& access_key_id, const std::string& secret_access_key, const std::string& access_token);
void insertIBMIAMHeaders(const std::string& access_key_id, const std::string& access_token);
void insertAuthHeaders();
std::string CalcSignatureV2(const std::string& method, const std::string& strMD5, const std::string& content_type, const std::string& date, const std::string& resource);
std::string CalcSignature(const std::string& method, const std::string& canonical_uri, const std::string& query_string, const std::string& strdate, const std::string& payload_hash, const std::string& date8601);
int GetIAMv2ApiToken();
int GetIAMCredentials();
std::string CalcSignatureV2(const std::string& method, const std::string& strMD5, const std::string& content_type, const std::string& date, const std::string& resource, const std::string& secret_access_key, const std::string& access_token);
std::string CalcSignature(const std::string& method, const std::string& canonical_uri, const std::string& query_string, const std::string& strdate, const std::string& payload_hash, const std::string& date8601, const std::string& secret_access_key, const std::string& access_token);
int UploadMultipartPostSetup(const char* tpath, int part_num, const std::string& upload_id);
int CopyMultipartPostSetup(const char* from, const char* to, int part_num, const std::string& upload_id, headers_t& meta);
bool UploadMultipartPostComplete();
@ -289,12 +265,12 @@ class S3fsCurl
public:
// class methods
static bool InitS3fsCurl();
static bool InitCredentialObject(S3fsCred* pcredobj);
static bool InitMimeType(const std::string& strFile);
static bool DestroyS3fsCurl();
static int ParallelMultipartUploadRequest(const char* tpath, headers_t& meta, int fd);
static int ParallelMixMultipartUploadRequest(const char* tpath, headers_t& meta, int fd, const fdpage_list_t& mixuppages);
static int ParallelGetObjectRequest(const char* tpath, int fd, off_t start, off_t size);
static bool CheckIAMCredentialUpdate();
// class methods(variables)
static std::string LookupMimeType(const std::string& name);
@ -331,16 +307,6 @@ class S3fsCurl
static bool GetVerbose() { return S3fsCurl::is_verbose; }
static bool SetDumpBody(bool flag);
static bool IsDumpBody() { return S3fsCurl::is_dump_body; }
static bool SetAccessKey(const char* AccessKeyId, const char* SecretAccessKey);
static bool SetAccessKeyWithSessionToken(const char* AccessKeyId, const char* SecretAccessKey, const char * SessionToken);
static bool IsSetAccessKeyID()
{
return !S3fsCurl::AWSAccessKeyId.empty();
}
static bool IsSetAccessKeys()
{
return !S3fsCurl::IAM_role.empty() || ((!S3fsCurl::AWSAccessKeyId.empty() || S3fsCurl::is_ibm_iam_auth) && !S3fsCurl::AWSSecretAccessKey.empty());
}
static long SetSslVerifyHostname(long value);
static long GetSslVerifyHostname() { return S3fsCurl::ssl_verify_hostname; }
static void ResetOffset(S3fsCurl* pCurl);
@ -350,20 +316,14 @@ class S3fsCurl
// maximum parallel HEAD requests
static int SetMaxMultiRequest(int max);
static int GetMaxMultiRequest() { return S3fsCurl::max_multireq; }
static bool SetIsECS(bool flag);
static bool SetIsIBMIAMAuth(bool flag);
static size_t SetIAMFieldCount(size_t field_count);
static std::string SetIAMCredentialsURL(const char* url);
static std::string SetIAMTokenField(const char* token_field);
static std::string SetIAMExpiryField(const char* expiry_field);
static std::string SetIAMRole(const char* role);
static const char* GetIAMRole() { return S3fsCurl::IAM_role.c_str(); }
static bool SetMultipartSize(off_t size);
static off_t GetMultipartSize() { return S3fsCurl::multipart_size; }
static bool SetMultipartCopySize(off_t size);
static off_t GetMultipartCopySize() { return S3fsCurl::multipart_copy_size; }
static signature_type_t SetSignatureType(signature_type_t signature_type) { signature_type_t bresult = S3fsCurl::signature_type; S3fsCurl::signature_type = signature_type; return bresult; }
static signature_type_t GetSignatureType() { return S3fsCurl::signature_type; }
static bool SetUnsignedPayload(bool issset) { bool bresult = S3fsCurl::is_unsigned_payload; S3fsCurl::is_unsigned_payload = issset; return bresult; }
static bool GetUnsignedPayload() { return S3fsCurl::is_unsigned_payload; }
static bool SetUserAgentFlag(bool isset) { bool bresult = S3fsCurl::is_ua; S3fsCurl::is_ua = isset; return bresult; }
static bool IsUserAgentFlag() { return S3fsCurl::is_ua; }
static void InitUserAgent();
@ -371,17 +331,18 @@ class S3fsCurl
static bool IsListObjectsV2() { return S3fsCurl::listobjectsv2; }
static bool SetRequesterPays(bool flag) { bool old_flag = S3fsCurl::requester_pays; S3fsCurl::requester_pays = flag; return old_flag; }
static bool IsRequesterPays() { return S3fsCurl::requester_pays; }
static bool SetIMDSVersion(int version);
// methods
bool CreateCurlHandle(bool only_pool = false, bool remake = false);
bool DestroyCurlHandle(bool restore_pool = true, bool clear_internal_data = true);
bool LoadIAMRoleFromMetaData();
bool GetIAMCredentials(const char* cred_url, const char* iam_v2_token, const char* ibm_secret_access_key, std::string& response);
bool GetIAMRoleFromMetaData(const char* cred_url, const char* iam_v2_token, std::string& token);
bool AddSseRequestHead(sse_type_t ssetype, const std::string& ssevalue, bool is_only_c, bool is_copy);
bool GetResponseCode(long& responseCode, bool from_curl_handle = true);
int RequestPerform(bool dontAddAuthHeaders=false);
int DeleteRequest(const char* tpath);
int GetIAMv2ApiToken(const char* token_url, int token_ttl, const char* token_ttl_hdr, std::string& response);
bool PreHeadRequest(const char* tpath, const char* bpath = NULL, const char* savedpath = NULL, size_t ssekey_pos = -1);
bool PreHeadRequest(const std::string& tpath, const std::string& bpath, const std::string& savedpath, size_t ssekey_pos = -1) {
return PreHeadRequest(tpath.c_str(), bpath.c_str(), savedpath.c_str(), ssekey_pos);
@ -399,7 +360,7 @@ class S3fsCurl
int MultipartListRequest(std::string& body);
int AbortMultipartUpload(const char* tpath, const std::string& upload_id);
int MultipartHeadRequest(const char* tpath, off_t size, headers_t& meta, bool is_copy);
int MultipartUploadRequest(const std::string& upload_id, const char* tpath, int fd, off_t offset, off_t size, int part_num, std::string* petag);
int MultipartUploadRequest(const std::string& upload_id, const char* tpath, int fd, off_t offset, off_t size, etagpair* petagpair);
int MultipartRenameRequest(const char* from, const char* to, headers_t& meta, off_t size);
// methods(variables)

View File

@ -27,6 +27,7 @@
#include "curl_util.h"
#include "string_util.h"
#include "s3fs_auth.h"
#include "s3fs_cred.h"
//-------------------------------------------------------------------
// Utility Functions
@ -244,7 +245,7 @@ bool MakeUrlResource(const char* realpath, std::string& resourcepath, std::strin
if(!realpath){
return false;
}
resourcepath = urlEncode(service_path + bucket + realpath);
resourcepath = urlEncode(service_path + S3fsCred::GetBucket() + realpath);
url = s3host + resourcepath;
return true;
}
@ -257,7 +258,7 @@ std::string prepare_url(const char* url)
std::string hostname;
std::string path;
std::string url_str = std::string(url);
std::string token = std::string("/") + bucket;
std::string token = std::string("/") + S3fsCred::GetBucket();
size_t bucket_pos;
size_t bucket_length = token.size();
size_t uri_length = 0;
@ -271,7 +272,7 @@ std::string prepare_url(const char* url)
bucket_pos = url_str.find(token, uri_length);
if(!pathrequeststyle){
hostname = bucket + "." + url_str.substr(uri_length, bucket_pos - uri_length);
hostname = S3fsCred::GetBucket() + "." + url_str.substr(uri_length, bucket_pos - uri_length);
path = url_str.substr((bucket_pos + bucket_length));
}else{
hostname = url_str.substr(uri_length, bucket_pos - uri_length);
@ -279,7 +280,7 @@ std::string prepare_url(const char* url)
if('/' != part[0]){
part = "/" + part;
}
path = "/" + bucket + part;
path = "/" + S3fsCred::GetBucket() + part;
}
url_str = uri + hostname + path;
@ -354,7 +355,7 @@ std::string url_to_host(const std::string &url)
std::string get_bucket_host()
{
if(!pathrequeststyle){
return bucket + "." + url_to_host(s3host);
return S3fsCred::GetBucket() + "." + url_to_host(s3host);
}
return url_to_host(s3host);
}

View File

@ -21,6 +21,7 @@
#include <cstdio>
#include <cstdlib>
#include <cerrno>
#include <climits>
#include <unistd.h>
#include <sys/types.h>
#include <dirent.h>
@ -31,6 +32,7 @@
#include "fdcache_pseudofd.h"
#include "s3fs_util.h"
#include "s3fs_logger.h"
#include "s3fs_cred.h"
#include "string_util.h"
#include "autolock.h"
@ -125,7 +127,7 @@ bool FdManager::DeleteCacheDirectory()
return false;
}
std::string mirror_path = FdManager::cache_dir + "/." + bucket + ".mirror";
std::string mirror_path = FdManager::cache_dir + "/." + S3fsCred::GetBucket() + ".mirror";
if(!delete_files_in_dir(mirror_path.c_str(), true)){
return false;
}
@ -181,10 +183,10 @@ bool FdManager::MakeCachePath(const char* path, std::string& cache_path, bool is
std::string resolved_path(FdManager::cache_dir);
if(!is_mirror_path){
resolved_path += "/";
resolved_path += bucket;
resolved_path += S3fsCred::GetBucket();
}else{
resolved_path += "/.";
resolved_path += bucket;
resolved_path += S3fsCred::GetBucket();
resolved_path += ".mirror";
}
@ -208,7 +210,7 @@ bool FdManager::CheckCacheTopDir()
if(FdManager::cache_dir.empty()){
return true;
}
std::string toppath(FdManager::cache_dir + "/" + bucket);
std::string toppath(FdManager::cache_dir + "/" + S3fsCred::GetBucket());
return check_exist_dir_permission(toppath.c_str());
}
@ -336,7 +338,7 @@ bool FdManager::HaveLseekHole()
result = false;
}
}
close(fd);
fclose(ptmpfp);
FdManager::checked_lseek = true;
FdManager::have_lseek_hole = result;
@ -407,6 +409,16 @@ bool FdManager::HasOpenEntityFd(const char* path)
return (0 < ent->GetOpenCount());
}
// [NOTE]
// Returns the number of open pseudo fds for the path.
//
int FdManager::GetOpenFdCount(const char* path)
{
AutoLock auto_lock(&FdManager::fd_manager_lock);
return FdManager::singleton.GetPseudoFdCount(path);
}
//------------------------------------------------
// FdManager methods
//------------------------------------------------
@ -521,9 +533,9 @@ FdEntity* FdManager::GetFdEntity(const char* path, int& existfd, bool newfd, boo
return NULL;
}
FdEntity* FdManager::Open(int& fd, const char* path, headers_t* pmeta, off_t size, time_t time, int flags, bool force_tmpfile, bool is_create, AutoLock::Type type)
FdEntity* FdManager::Open(int& fd, const char* path, headers_t* pmeta, off_t size, time_t time, int flags, bool force_tmpfile, bool is_create, bool ignore_modify, AutoLock::Type type)
{
S3FS_PRN_DBG("[path=%s][size=%lld][time=%lld][flags=0x%x]", SAFESTRPTR(path), static_cast<long long>(size), static_cast<long long>(time), flags);
S3FS_PRN_DBG("[path=%s][size=%lld][time=%lld][flags=0x%x][force_tmpfile=%s][create=%s][ignore_modify=%s]", SAFESTRPTR(path), static_cast<long long>(size), static_cast<long long>(time), flags, (force_tmpfile ? "yes" : "no"), (is_create ? "yes" : "no"), (ignore_modify ? "yes" : "no"));
if(!path || '\0' == path[0]){
return NULL;
@ -551,7 +563,13 @@ FdEntity* FdManager::Open(int& fd, const char* path, headers_t* pmeta, off_t siz
// found
ent = iter->second;
if(ent->IsModified()){
// [NOTE]
// If the file is being modified and the ignore_modify flag is false,
// the file size is not changed even if a request arrives to shrink
// the modified file. Otherwise, the "test_open_second_fd" test
// would fail.
//
if(!ignore_modify && ent->IsModified()){
// If the file is being modified and its size is larger than the size parameter, it will not be resized.
off_t cur_size = 0;
if(ent->GetSize(cur_size) && size <= cur_size){
@ -628,7 +646,7 @@ FdEntity* FdManager::OpenExistFdEntity(const char* path, int& fd, int flags)
S3FS_PRN_DBG("[path=%s][flags=0x%x]", SAFESTRPTR(path), flags);
// search entity by path, and create pseudo fd
FdEntity* ent = Open(fd, path, NULL, -1, -1, flags, false, false, AutoLock::NONE);
FdEntity* ent = Open(fd, path, NULL, -1, -1, flags, false, false, false, AutoLock::NONE);
if(!ent){
// Not found entity
return NULL;
@ -636,6 +654,29 @@ FdEntity* FdManager::OpenExistFdEntity(const char* path, int& fd, int flags)
return ent;
}
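For illustration, a hedged sketch of a size-changing caller (modeled on what a truncate path might look like; the exact call site is not shown in this diff). Passing ignore_modify=true lets the resize take effect even while the entity still carries unflushed modifications:

#include <cerrno>
#include <fcntl.h>
#include <sys/types.h>
#include "fdcache_auto.h"

// Hypothetical caller, not part of this diff: only a size-changing
// open passes ignore_modify=true (ninth argument).
static int truncate_like(const char* path, off_t size)
{
    AutoFdEntity autoent;
    if(NULL == autoent.Open(path, NULL, size, -1, O_RDWR, false, true, true /* ignore_modify */, AutoLock::NONE)){
        return -EIO;
    }
    return 0;
}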
// [NOTE]
// Returns the number of open pseudo fds for the path.
// This method is called from the GetOpenFdCount method, which already holds the lock.
//
int FdManager::GetPseudoFdCount(const char* path)
{
S3FS_PRN_DBG("[path=%s]", SAFESTRPTR(path));
if(!path || '\0' == path[0]){
return 0;
}
// search from all entity.
for(fdent_map_t::iterator iter = fent.begin(); iter != fent.end(); ++iter){
if(iter->second && 0 == strcmp(iter->second->GetPath(), path)){
// found the entity for the path
return iter->second->GetOpenCount();
}
}
// not found entity
return 0;
}
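A hedged usage sketch of the new public entry point (the caller shown here is an assumption, not part of this diff): the manager reports how many pseudo fds still reference a path, which lets callers decide whether an object is still in use.

#include "fdcache.h"

// Hypothetical helper, not part of this diff: true while any pseudo
// fd still references the object at path.
static bool is_object_still_open(const char* path)
{
    return 0 < FdManager::GetOpenFdCount(path);
}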
void FdManager::Rename(const std::string &from, const std::string &to)
{
AutoLock auto_lock(&FdManager::fd_manager_lock);
@ -749,7 +790,7 @@ void FdManager::CleanupCacheDirInternal(const std::string &path)
{
DIR* dp;
struct dirent* dent;
std::string abs_path = cache_dir + "/" + bucket + path;
std::string abs_path = cache_dir + "/" + S3fsCred::GetBucket() + path;
if(NULL == (dp = opendir(abs_path.c_str()))){
S3FS_PRN_ERR("could not open cache dir(%s) - errno(%d)", abs_path.c_str(), errno);

View File

@ -47,9 +47,11 @@ class FdManager
private:
static off_t GetFreeDiskSpace(const char* path);
static bool IsDir(const std::string* dir);
int GetPseudoFdCount(const char* path);
void CleanupCacheDirInternal(const std::string &path = "");
bool RawCheckAllCache(FILE* fp, const char* cache_stat_top_dir, const char* sub_path, int& total_file_cnt, int& err_file_cnt, int& err_dir_cnt);
static bool IsDir(const std::string* dir);
public:
FdManager();
@ -71,6 +73,7 @@ class FdManager
static bool SetCheckCacheDirExist(bool is_check);
static bool CheckCacheDirExist();
static bool HasOpenEntityFd(const char* path);
static int GetOpenFdCount(const char* path);
static off_t GetEnsureFreeDiskSpace();
static off_t SetEnsureFreeDiskSpace(off_t size);
static bool InitFakeUsedDiskSize(off_t fake_freesize);
@ -84,7 +87,7 @@ class FdManager
// Return FdEntity associated with path, returning NULL on error. This operation increments the reference count; callers must decrement via Close after use.
FdEntity* GetFdEntity(const char* path, int& existfd, bool newfd = true, bool lock_already_held = false);
FdEntity* Open(int& fd, const char* path, headers_t* pmeta, off_t size, time_t time, int flags, bool force_tmpfile, bool is_create, AutoLock::Type type);
FdEntity* Open(int& fd, const char* path, headers_t* pmeta, off_t size, time_t time, int flags, bool force_tmpfile, bool is_create, bool ignore_modify, AutoLock::Type type);
FdEntity* GetExistFdEntity(const char* path, int existfd = -1);
FdEntity* OpenExistFdEntity(const char* path, int& fd, int flags = O_RDONLY);
void Rename(const std::string &from, const std::string &to);

View File

@ -98,11 +98,11 @@ bool AutoFdEntity::Attach(const char* path, int existfd)
return true;
}
FdEntity* AutoFdEntity::Open(const char* path, headers_t* pmeta, off_t size, time_t time, int flags, bool force_tmpfile, bool is_create, AutoLock::Type type)
FdEntity* AutoFdEntity::Open(const char* path, headers_t* pmeta, off_t size, time_t time, int flags, bool force_tmpfile, bool is_create, bool ignore_modify, AutoLock::Type type)
{
Close();
if(NULL == (pFdEntity = FdManager::get()->Open(pseudo_fd, path, pmeta, size, time, flags, force_tmpfile, is_create, type))){
if(NULL == (pFdEntity = FdManager::get()->Open(pseudo_fd, path, pmeta, size, time, flags, force_tmpfile, is_create, ignore_modify, type))){
pseudo_fd = -1;
return NULL;
}

View File

@ -51,7 +51,7 @@ class AutoFdEntity
bool Attach(const char* path, int existfd);
int GetPseudoFd() const { return pseudo_fd; }
FdEntity* Open(const char* path, headers_t* pmeta, off_t size, time_t time, int flags, bool force_tmpfile, bool is_create, AutoLock::Type type);
FdEntity* Open(const char* path, headers_t* pmeta, off_t size, time_t time, int flags, bool force_tmpfile, bool is_create, bool ignore_modify, AutoLock::Type type);
FdEntity* GetExistFdEntity(const char* path, int existfd = -1);
FdEntity* OpenExistFdEntity(const char* path, int flags = O_RDONLY);
};

View File

@ -404,7 +404,7 @@ bool FdEntity::IsUploading(bool lock_already_held)
// If the open is successful, returns pseudo fd.
// If it fails, it returns an error code with a negative value.
//
int FdEntity::Open(headers_t* pmeta, off_t size, time_t time, int flags, AutoLock::Type type)
int FdEntity::Open(const headers_t* pmeta, off_t size, time_t time, int flags, AutoLock::Type type)
{
AutoLock auto_lock(&fdent_lock, type);
@ -426,7 +426,7 @@ int FdEntity::Open(headers_t* pmeta, off_t size, time_t time, int flags, AutoLoc
// check only the file size (no need to save cfs and time)
if(0 <= size && pagelist.Size() != size){
// truncate temporary file size
if(-1 == ftruncate(physical_fd, size)){
if(-1 == ftruncate(physical_fd, size) || -1 == fsync(physical_fd)){
S3FS_PRN_ERR("failed to truncate temporary file(physical_fd=%d) by errno(%d).", physical_fd, errno);
return -errno;
}
@ -440,7 +440,7 @@ int FdEntity::Open(headers_t* pmeta, off_t size, time_t time, int flags, AutoLoc
off_t new_size = (0 <= size ? size : size_orgmeta);
if(pmeta){
orgmeta = *pmeta;
new_size = get_size(orgmeta);
size_orgmeta = get_size(orgmeta);
}
if(new_size < size_orgmeta){
size_orgmeta = new_size;
@ -793,10 +793,12 @@ int FdEntity::SetMCtime(struct timespec mtime, struct timespec ctime, bool lock_
}
}else if(!cachepath.empty()){
// not opened file yet.
struct utimbuf n_time;
n_time.modtime = mtime.tv_sec;
n_time.actime = ctime.tv_sec;
if(-1 == utime(cachepath.c_str(), &n_time)){
struct timeval n_time[2];
n_time[0].tv_sec = ctime.tv_sec;
n_time[0].tv_usec = ctime.tv_nsec / 1000;
n_time[1].tv_sec = mtime.tv_sec;
n_time[1].tv_usec = mtime.tv_nsec / 1000;
if(-1 == utimes(cachepath.c_str(), n_time)){
S3FS_PRN_ERR("utime failed. errno(%d)", errno);
return -errno;
}
@ -884,18 +886,30 @@ bool FdEntity::ClearHoldingMtime(bool lock_already_held)
struct timeval tv[2];
tv[0].tv_sec = holding_mtime.tv_sec;
tv[0].tv_usec = holding_mtime.tv_nsec / 1000;
#if defined(__APPLE__)
tv[1].tv_sec = st.st_ctime;
tv[1].tv_usec = 0;
tv[1].tv_usec = st.st_ctimespec.tv_nsec / 1000;
#else
tv[1].tv_sec = st.st_ctim.tv_sec;
tv[1].tv_usec = st.st_ctim.tv_nsec / 1000;
#endif
if(-1 == futimes(physical_fd, tv)){
S3FS_PRN_ERR("futimes failed. errno(%d)", errno);
return false;
}
}else if(!cachepath.empty()){
// not opened file yet.
struct utimbuf n_time;
n_time.modtime = holding_mtime.tv_sec;
n_time.actime = st.st_ctime;
if(-1 == utime(cachepath.c_str(), &n_time)){
struct timeval n_time[2];
#if defined(__APPLE__)
n_time[0].tv_sec = st.st_ctime;
n_time[0].tv_usec = st.st_ctimespec.tv_nsec / 1000;
#else
n_time[0].tv_sec = st.st_ctime;
n_time[0].tv_usec = st.st_ctim.tv_nsec / 1000;
#endif
n_time[1].tv_sec = holding_mtime.tv_sec;
n_time[1].tv_usec = holding_mtime.tv_nsec / 1000;
if(-1 == utimes(cachepath.c_str(), n_time)){
S3FS_PRN_ERR("utime failed. errno(%d)", errno);
return false;
}
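A minimal standalone sketch of the utime-to-utimes switch above (not the s3fs code itself): utimes(2) takes microseconds, so converting each timespec with tv_nsec / 1000 preserves the sub-second precision that the old utime(2) call truncated to whole seconds.

#include <sys/time.h>
#include <ctime>

// Sketch only: slot 0 of the timeval pair is the access time and
// slot 1 the modification time, per the utimes(2) contract.
static int set_times_usec(const char* path, struct timespec atime, struct timespec mtime)
{
    struct timeval tv[2];
    tv[0].tv_sec  = atime.tv_sec;
    tv[0].tv_usec = atime.tv_nsec / 1000;   // ns -> us
    tv[1].tv_sec  = mtime.tv_sec;
    tv[1].tv_usec = mtime.tv_nsec / 1000;
    return utimes(path, tv);                // 0 on success, -1 with errno on failure
}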
@ -1240,6 +1254,9 @@ int FdEntity::NoCachePreMultipartPost(PseudoFdInfo* pseudo_obj)
}
s3fscurl.DestroyCurlHandle();
// Clear the dirty flag, because the metadata is updated.
is_meta_pending = false;
// reset upload_id
if(!pseudo_obj->InitialUploadInfo(upload_id)){
return -EIO;
@ -1265,14 +1282,13 @@ int FdEntity::NoCacheMultipartPost(PseudoFdInfo* pseudo_obj, int tgfd, off_t sta
}
// append new part and get its etag string pointer
int partnum = 0;
std::string* petag = NULL;
if(!pseudo_obj->AppendUploadPart(start, size, false, &partnum, &petag)){
etagpair* petagpair = NULL;
if(!pseudo_obj->AppendUploadPart(start, size, false, &petagpair)){
return -EIO;
}
S3fsCurl s3fscurl(true);
return s3fscurl.MultipartUploadRequest(upload_id, path.c_str(), tgfd, start, size, partnum, petag);
return s3fscurl.MultipartUploadRequest(upload_id, path.c_str(), tgfd, start, size, petagpair);
}
// [NOTE]
@ -1341,7 +1357,7 @@ int FdEntity::RowFlush(int fd, const char* tpath, bool force_sync)
if(pseudo_fd_map.end() == miter || NULL == miter->second){
return -EBADF;
}
if(!miter->second->Writable()){
if(!miter->second->Writable() && !(miter->second->GetFlags() & O_CREAT)){
// If the entity is opened read-only, it will end normally without updating.
return 0;
}
@ -1511,7 +1527,7 @@ int FdEntity::RowFlushMultipart(PseudoFdInfo* pseudo_obj, const char* tpath)
// upload rest data
off_t untreated_start = 0;
off_t untreated_size = 0;
if(pseudo_obj->GetLastUntreated(untreated_start, untreated_size, S3fsCurl::GetMultipartSize()) && 0 < untreated_size){
if(pseudo_obj->GetLastUntreated(untreated_start, untreated_size, S3fsCurl::GetMultipartSize(), 0) && 0 < untreated_size){
if(0 != (result = NoCacheMultipartPost(pseudo_obj, physical_fd, untreated_start, untreated_size))){
S3FS_PRN_ERR("failed to multipart post(start=%lld, size=%lld) for file(physical_fd=%d).", static_cast<long long int>(untreated_start), static_cast<long long int>(untreated_size), physical_fd);
return result;
@ -1536,6 +1552,7 @@ int FdEntity::RowFlushMultipart(PseudoFdInfo* pseudo_obj, const char* tpath)
if(0 == result){
pagelist.ClearAllModified();
is_meta_pending = false;
}
return result;
}
@ -1638,7 +1655,7 @@ int FdEntity::RowFlushMixMultipart(PseudoFdInfo* pseudo_obj, const char* tpath)
// upload rest data
off_t untreated_start = 0;
off_t untreated_size = 0;
if(pseudo_obj->GetLastUntreated(untreated_start, untreated_size, S3fsCurl::GetMultipartSize()) && 0 < untreated_size){
if(pseudo_obj->GetLastUntreated(untreated_start, untreated_size, S3fsCurl::GetMultipartSize(), 0) && 0 < untreated_size){
if(0 != (result = NoCacheMultipartPost(pseudo_obj, physical_fd, untreated_start, untreated_size))){
S3FS_PRN_ERR("failed to multipart post(start=%lld, size=%lld) for file(physical_fd=%d).", static_cast<long long int>(untreated_start), static_cast<long long int>(untreated_size), physical_fd);
return result;
@ -1663,6 +1680,7 @@ int FdEntity::RowFlushMixMultipart(PseudoFdInfo* pseudo_obj, const char* tpath)
if(0 == result){
pagelist.ClearAllModified();
is_meta_pending = false;
}
return result;
}
@ -2081,14 +2099,13 @@ int put_headers(const char* path, headers_t& meta, bool is_copy, bool use_st_siz
int FdEntity::UploadPendingMeta()
{
AutoLock auto_lock(&fdent_lock);
if(!is_meta_pending) {
return 0;
}
headers_t updatemeta = orgmeta;
updatemeta["x-amz-copy-source"] = urlEncode(service_path + bucket + get_realpath(path.c_str()));
updatemeta["x-amz-copy-source"] = urlEncode(service_path + S3fsCred::GetBucket() + get_realpath(path.c_str()));
updatemeta["x-amz-metadata-directive"] = "REPLACE";
// put headers, no need to update mtime to avoid dead lock
int result = put_headers(path.c_str(), updatemeta, true);
if(0 != result){
@ -2143,7 +2160,7 @@ bool FdEntity::PunchHole(off_t start, size_t size)
// get page list that have no data
fdpage_list_t nodata_pages;
if(!pagelist.GetNoDataPageLists(nodata_pages)){
S3FS_PRN_ERR("filed to get page list that have no data.");
S3FS_PRN_ERR("failed to get page list that have no data.");
return false;
}
if(nodata_pages.empty()){
@ -2155,9 +2172,9 @@ bool FdEntity::PunchHole(off_t start, size_t size)
for(fdpage_list_t::const_iterator iter = nodata_pages.begin(); iter != nodata_pages.end(); ++iter){
if(0 != fallocate(physical_fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, iter->offset, iter->bytes)){
if(ENOSYS == errno || EOPNOTSUPP == errno){
S3FS_PRN_ERR("filed to fallocate for punching hole to file with errno(%d), it maybe the fallocate function is not implemented in this kernel, or the file system does not support FALLOC_FL_PUNCH_HOLE.", errno);
S3FS_PRN_ERR("failed to fallocate for punching hole to file with errno(%d), it maybe the fallocate function is not implemented in this kernel, or the file system does not support FALLOC_FL_PUNCH_HOLE.", errno);
}else{
S3FS_PRN_ERR("filed to fallocate for punching hole to file with errno(%d)", errno);
S3FS_PRN_ERR("failed to fallocate for punching hole to file with errno(%d)", errno);
}
return false;
}
@ -2170,6 +2187,16 @@ bool FdEntity::PunchHole(off_t start, size_t size)
return true;
}
// [NOTE]
// Indicate that a new file is dirty.
// This ensures that both metadata and data are synced during flush.
//
void FdEntity::MarkDirtyNewFile()
{
pagelist.Init(0, false, true);
is_meta_pending = true;
}
/*
* Local variables:
* tab-width: 4

View File

@ -85,7 +85,7 @@ class FdEntity
void Close(int fd);
bool IsOpen() const { return (-1 != physical_fd); }
bool FindPseudoFd(int fd, bool lock_already_held = false);
int Open(headers_t* pmeta, off_t size, time_t time, int flags, AutoLock::Type type);
int Open(const headers_t* pmeta, off_t size, time_t time, int flags, AutoLock::Type type);
bool LoadAll(int fd, headers_t* pmeta = NULL, off_t* size = NULL, bool force_load = false);
int Dup(int fd, bool lock_already_held = false);
int OpenPseudoFd(int flags = O_RDONLY, bool lock_already_held = false);
@ -126,11 +126,7 @@ class FdEntity
bool ReserveDiskSpace(off_t size);
bool PunchHole(off_t start = 0, size_t size = 0);
// Indicate that a new file is dirty. This ensures that both metadata and data are synced during flush.
void MarkDirtyNewFile() {
pagelist.SetPageLoadedStatus(0, 1, PageList::PAGE_LOAD_MODIFIED);
is_meta_pending = true;
}
void MarkDirtyNewFile();
};
typedef std::map<std::string, class FdEntity*> fdent_map_t; // key=path, value=FdEntity*

View File

@ -141,7 +141,7 @@ bool PseudoFdInfo::InitialUploadInfo(const std::string& id)
bool PseudoFdInfo::GetUploadId(std::string& id) const
{
if(IsUploading()){
if(!IsUploading()){
S3FS_PRN_ERR("Multipart Upload has not started yet.");
return false;
}
@ -151,7 +151,7 @@ bool PseudoFdInfo::GetUploadId(std::string& id) const
bool PseudoFdInfo::GetEtaglist(etaglist_t& list)
{
if(IsUploading()){
if(!IsUploading()){
S3FS_PRN_ERR("Multipart Upload has not started yet.");
return false;
}
@ -177,9 +177,9 @@ bool PseudoFdInfo::GetEtaglist(etaglist_t& list)
// An error will occur if it is discontinuous or if it overlaps with an
// existing area.
//
bool PseudoFdInfo::AppendUploadPart(off_t start, off_t size, bool is_copy, int* ppartnum, std::string** ppetag)
bool PseudoFdInfo::AppendUploadPart(off_t start, off_t size, bool is_copy, etagpair** ppetag)
{
if(IsUploading()){
if(!IsUploading()){
S3FS_PRN_ERR("Multipart Upload has not started yet.");
return false;
}
@ -194,21 +194,20 @@ bool PseudoFdInfo::AppendUploadPart(off_t start, off_t size, bool is_copy, int*
return false;
}
// add new part
etag_entities.push_back(std::string("")); // [NOTE] Create the etag entity and register it in the list.
std::string& etag_entity = etag_entities.back();
filepart newpart(false, physical_fd, start, size, is_copy, &etag_entity);
upload_list.push_back(newpart);
// make part number
int partnumber = static_cast<int>(upload_list.size()) + 1;
// set part number
if(ppartnum){
*ppartnum = static_cast<int>(upload_list.size());
}
// add new part
etag_entities.push_back(etagpair(NULL, partnumber)); // [NOTE] Create the etag entity and register it in the list.
etagpair& etag_entity = etag_entities.back();
filepart newpart(false, physical_fd, start, size, is_copy, &etag_entity);
upload_list.push_back(newpart);
// set etag pointer
if(ppetag){
*ppetag = &etag_entity;
}
return true;
}
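For context, a minimal sketch of what the new etagpair entity presumably carries (its real definition lives elsewhere in this tree; the field names below are inferred from this diff, not confirmed by it). Bundling the part number with its etag is why AppendUploadPart no longer needs the separate ppartnum out-parameter:

#include <string>

// Assumed shape, named _sketch to avoid claiming the real definition.
struct etagpair_sketch
{
    std::string etag;      // etag returned by the part upload
    int         part_num;  // 1-origin part number within the upload
    etagpair_sketch(const char* petag, int part) : etag(petag ? petag : ""), part_num(part) {}
};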

View File

@ -35,7 +35,7 @@ class PseudoFdInfo
std::string upload_id;
filepart_list_t upload_list;
UntreatedParts untreated_list; // list of untreated parts that have been written and not yet uploaded(for streamupload)
etaglist_t etag_entities; // list of etag string entities(to maintain the etag entity even if MPPART_INFO is destroyed)
etaglist_t etag_entities; // list of etag string and part number entities (to maintain the etag entity even if MPPART_INFO is destroyed)
bool is_lock_init;
pthread_mutex_t upload_list_lock; // protects upload_id and upload_list
@ -61,7 +61,7 @@ class PseudoFdInfo
bool GetUploadId(std::string& id) const;
bool GetEtaglist(etaglist_t& list);
bool AppendUploadPart(off_t start, off_t size, bool is_copy = false, int* ppartnum = NULL, std::string** ppetag = NULL);
bool AppendUploadPart(off_t start, off_t size, bool is_copy = false, etagpair** ppetag = NULL);
void ClearUntreated(bool lock_already_held = false);
bool ClearUntreated(off_t start, off_t size);

View File

@ -62,6 +62,8 @@ inline void raw_add_compress_fdpage_list(fdpage_list_t& pagelist, fdpage& page,
// default_modify: modified flag value in the list after compression when default_modify=true
//
// NOTE: ignore_modify and ignore_load cannot both be true.
// Zero-size pages are deleted. However, if such a page is the only one
// in the list, it is kept; this is required to represent a newly created empty file.
//
static fdpage_list_t raw_compress_fdpage_list(const fdpage_list_t& pages, bool ignore_load, bool ignore_modify, bool default_load, bool default_modify)
{
@ -70,28 +72,33 @@ static fdpage_list_t raw_compress_fdpage_list(const fdpage_list_t& pages, bool i
bool is_first = true;
for(fdpage_list_t::const_iterator iter = pages.begin(); iter != pages.end(); ++iter){
if(!is_first){
if( (!ignore_load && (tmppage.loaded != iter->loaded )) ||
(!ignore_modify && (tmppage.modified != iter->modified)) )
{
// Different from the previous area, add it to list
raw_add_compress_fdpage_list(compressed_pages, tmppage, ignore_load, ignore_modify, default_load, default_modify);
// keep current area
tmppage = fdpage(iter->offset, iter->bytes, (ignore_load ? default_load : iter->loaded), (ignore_modify ? default_modify : iter->modified));
}else{
// Same as the previous area
if(tmppage.next() != iter->offset){
// These are not contiguous areas, add it to list
if(0 < tmppage.bytes){
if( (!ignore_load && (tmppage.loaded != iter->loaded )) ||
(!ignore_modify && (tmppage.modified != iter->modified)) )
{
// Different from the previous area, add it to list
raw_add_compress_fdpage_list(compressed_pages, tmppage, ignore_load, ignore_modify, default_load, default_modify);
// keep current area
tmppage = fdpage(iter->offset, iter->bytes, (ignore_load ? default_load : iter->loaded), (ignore_modify ? default_modify : iter->modified));
}else{
// These are contiguous areas
// Same as the previous area
if(tmppage.next() != iter->offset){
// These are not contiguous areas, add it to list
raw_add_compress_fdpage_list(compressed_pages, tmppage, ignore_load, ignore_modify, default_load, default_modify);
// add current area
tmppage.bytes += iter->bytes;
// keep current area
tmppage = fdpage(iter->offset, iter->bytes, (ignore_load ? default_load : iter->loaded), (ignore_modify ? default_modify : iter->modified));
}else{
// These are contiguous areas
// add current area
tmppage.bytes += iter->bytes;
}
}
}else{
// if found empty page, skip it
tmppage = fdpage(iter->offset, iter->bytes, (ignore_load ? default_load : iter->loaded), (ignore_modify ? default_modify : iter->modified));
}
}else{
// first area
@ -103,7 +110,13 @@ static fdpage_list_t raw_compress_fdpage_list(const fdpage_list_t& pages, bool i
}
// add last area
if(!is_first){
raw_add_compress_fdpage_list(compressed_pages, tmppage, ignore_load, ignore_modify, default_load, default_modify);
// [NOTE]
// Zero-size pages are not allowed. However, if it is the only page, allow it.
// This special case exists only to support creating empty files.
//
if(compressed_pages.empty() || 0 != tmppage.bytes){
raw_add_compress_fdpage_list(compressed_pages, tmppage, ignore_load, ignore_modify, default_load, default_modify);
}
}
return compressed_pages;
}
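A toy model of the rule described above, using a deliberately simplified page type (not the real fdpage): zero-size pages are dropped during compression, but if dropping them would leave the list empty, one is kept so that a newly created empty file still has a page entry.

#include <vector>

struct toy_page { long long offset; long long bytes; };

static std::vector<toy_page> toy_compress(const std::vector<toy_page>& in)
{
    std::vector<toy_page> out;
    for(std::vector<toy_page>::const_iterator it = in.begin(); it != in.end(); ++it){
        if(0 < it->bytes){
            out.push_back(*it);     // keep only non-empty pages ...
        }
    }
    if(out.empty() && !in.empty()){
        out.push_back(in.back());   // ... unless the lone zero-size page is all there is
    }
    return out;
}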
@ -293,7 +306,7 @@ bool PageList::CheckAreaInSparseFile(const struct fdpage& checkpage, const fdpag
check_start = checkpage.offset;
check_bytes = iter->bytes - (checkpage.offset - iter->offset);
}else if(iter->offset < (checkpage.offset + checkpage.bytes) && (checkpage.offset + checkpage.bytes) < (iter->offset + iter->bytes)){
}else if((checkpage.offset + checkpage.bytes) < (iter->offset + iter->bytes)){ // here, already "iter->offset < (checkpage.offset + checkpage.bytes)" is true.
// case 3
check_start = iter->offset;
check_bytes = checkpage.bytes - (iter->offset - checkpage.offset);
@ -342,7 +355,7 @@ void PageList::FreeList(fdpage_list_t& list)
list.clear();
}
PageList::PageList(off_t size, bool is_loaded, bool is_modified)
PageList::PageList(off_t size, bool is_loaded, bool is_modified, bool shrinked) : is_shrink(shrinked)
{
Init(size, is_loaded, is_modified);
}
@ -352,6 +365,7 @@ PageList::PageList(const PageList& other)
for(fdpage_list_t::const_iterator iter = other.pages.begin(); iter != other.pages.end(); ++iter){
pages.push_back(*iter);
}
is_shrink = other.is_shrink;
}
PageList::~PageList()
@ -362,12 +376,13 @@ PageList::~PageList()
void PageList::Clear()
{
PageList::FreeList(pages);
is_shrink = false;
}
bool PageList::Init(off_t size, bool is_loaded, bool is_modified)
{
Clear();
if(0 < size){
if(0 <= size){
fdpage page(0, size, is_loaded, is_modified);
pages.push_back(page);
}
@ -431,6 +446,9 @@ bool PageList::Resize(off_t size, bool is_loaded, bool is_modified)
}
}
}
if(is_modified){
is_shrink = true;
}
}else{ // total == size
// nothing to do
}
@ -753,6 +771,9 @@ off_t PageList::BytesModified() const
bool PageList::IsModified() const
{
if(is_shrink){
return true;
}
for(fdpage_list_t::const_iterator iter = pages.begin(); iter != pages.end(); ++iter){
if(iter->modified){
return true;
@ -763,6 +784,8 @@ bool PageList::IsModified() const
bool PageList::ClearAllModified()
{
is_shrink = false;
for(fdpage_list_t::iterator iter = pages.begin(); iter != pages.end(); ++iter){
if(iter->modified){
iter->modified = false;
@ -926,7 +949,7 @@ void PageList::Dump() const
{
int cnt = 0;
S3FS_PRN_DBG("pages = {");
S3FS_PRN_DBG("pages (shrinked=%s) = {", (is_shrink ? "yes" : "no"));
for(fdpage_list_t::const_iterator iter = pages.begin(); iter != pages.end(); ++iter, ++cnt){
S3FS_PRN_DBG(" [%08d] -> {%014lld - %014lld : %s / %s}", cnt, static_cast<long long int>(iter->offset), static_cast<long long int>(iter->bytes), iter->loaded ? "loaded" : "unloaded", iter->modified ? "modified" : "not modified");
}
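A hedged sketch of the new invariant, inferred from this diff: once a page list has been shrunk while modified, it reports itself dirty until ClearAllModified() resets the flag, even if every individual page is clean.

// Sketch only; PageList used as declared in this diff.
static void shrink_example(PageList& pagelist)
{
    pagelist.Resize(0, /*is_loaded=*/false, /*is_modified=*/true);  // shrinking sets is_shrink
    // pagelist.IsModified() is now true even with no page marked modified
    pagelist.ClearAllModified();                                    // also clears is_shrink
}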

View File

@ -77,6 +77,7 @@ class PageList
private:
fdpage_list_t pages;
bool is_shrink; // [NOTE] true if the list has ever been shrunk
public:
enum page_status{
@ -97,7 +98,7 @@ class PageList
public:
static void FreeList(fdpage_list_t& list);
explicit PageList(off_t size = 0, bool is_loaded = false, bool is_modified = false);
explicit PageList(off_t size = 0, bool is_loaded = false, bool is_modified = false, bool shrinked = false);
explicit PageList(const PageList& other);
~PageList();

View File

@ -29,6 +29,7 @@
#include "fdcache_stat.h"
#include "fdcache.h"
#include "s3fs_util.h"
#include "s3fs_cred.h"
#include "string_util.h"
//------------------------------------------------
@ -37,14 +38,14 @@
std::string CacheFileStat::GetCacheFileStatTopDir()
{
std::string top_path;
if(!FdManager::IsCacheDir() || bucket.empty()){
if(!FdManager::IsCacheDir() || S3fsCred::GetBucket().empty()){
return top_path;
}
// stat top dir( "/<cache_dir>/.<bucket_name>.stat" )
top_path += FdManager::GetCacheDir();
top_path += "/.";
top_path += bucket;
top_path += S3fsCred::GetBucket();
top_path += ".stat";
return top_path;
}

File diff suppressed because it is too large

1284
src/s3fs_cred.cpp Normal file

File diff suppressed because it is too large

161
src/s3fs_cred.h Normal file
View File

@ -0,0 +1,161 @@
/*
* s3fs - FUSE-based file system backed by Amazon S3
*
* Copyright(C) 2007 Randy Rizun <rrizun@gmail.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef S3FS_CRED_H_
#define S3FS_CRED_H_
#include "autolock.h"
//----------------------------------------------
// Typedefs
//----------------------------------------------
typedef std::map<std::string, std::string> iamcredmap_t;
//------------------------------------------------
// class S3fsCred
//------------------------------------------------
// This class operates on and manages the credentials (access key,
// secret key, tokens, etc.) used by s3fs.
// All credential-related operations are aggregated in this class.
//
// cppcheck-suppress ctuOneDefinitionRuleViolation ; for stub in test_curl_util.cpp
class S3fsCred
{
private:
static const char* ALLBUCKET_FIELDS_TYPE; // special key for mapping (this name is never used as a bucket name)
static const char* KEYVAL_FIELDS_TYPE; // special key for mapping (this name is never used as a bucket name)
static const char* AWS_ACCESSKEYID;
static const char* AWS_SECRETKEY;
static const int IAM_EXPIRE_MERGIN;
static const char* ECS_IAM_ENV_VAR;
static const char* IAMCRED_ACCESSKEYID;
static const char* IAMCRED_SECRETACCESSKEY;
static const char* IAMCRED_ROLEARN;
static std::string bucket_name;
pthread_mutex_t token_lock;
bool is_lock_init;
std::string passwd_file;
std::string aws_profile;
bool load_iamrole;
std::string AWSAccessKeyId; // Protect exclusively
std::string AWSSecretAccessKey; // Protect exclusively
std::string AWSAccessToken; // Protect exclusively
time_t AWSAccessTokenExpire; // Protect exclusively
bool is_ecs;
bool is_use_session_token;
bool is_ibm_iam_auth;
std::string IAM_cred_url;
int IAM_api_version; // Protect exclusively
std::string IAMv2_api_token; // Protect exclusively
size_t IAM_field_count;
std::string IAM_token_field;
std::string IAM_expiry_field;
std::string IAM_role; // Protect exclusively
public:
static const char* IAMv2_token_url;
static int IAMv2_token_ttl;
static const char* IAMv2_token_ttl_hdr;
static const char* IAMv2_token_hdr;
private:
static bool ParseIAMRoleFromMetaDataResponse(const char* response, std::string& rolename);
bool SetS3fsPasswdFile(const char* file);
bool IsSetPasswdFile();
bool SetAwsProfileName(const char* profile_name);
bool SetIAMRoleMetadataType(bool flag);
bool SetAccessKey(const char* AccessKeyId, const char* SecretAccessKey, AutoLock::Type type);
bool SetAccessKeyWithSessionToken(const char* AccessKeyId, const char* SecretAccessKey, const char * SessionToken, AutoLock::Type type);
bool IsSetAccessKeys(AutoLock::Type type);
bool SetIsECS(bool flag);
bool SetIsUseSessionToken(bool flag);
bool SetIsIBMIAMAuth(bool flag);
int SetIMDSVersion(int version, AutoLock::Type type);
int GetIMDSVersion(AutoLock::Type type);
bool SetIAMv2APIToken(const std::string& token, AutoLock::Type type);
std::string GetIAMv2APIToken(AutoLock::Type type);
bool SetIAMRole(const char* role, AutoLock::Type type);
std::string GetIAMRole(AutoLock::Type type);
bool IsSetIAMRole(AutoLock::Type type);
size_t SetIAMFieldCount(size_t field_count);
std::string SetIAMCredentialsURL(const char* url);
std::string SetIAMTokenField(const char* token_field);
std::string SetIAMExpiryField(const char* expiry_field);
bool IsReadableS3fsPasswdFile();
bool CheckS3fsPasswdFilePerms();
bool ParseS3fsPasswdFile(bucketkvmap_t& resmap);
bool ReadS3fsPasswdFile(AutoLock::Type type);
int CheckS3fsCredentialAwsFormat(const kvmap_t& kvmap, std::string& access_key_id, std::string& secret_access_key);
bool ReadAwsCredentialFile(const std::string &filename, AutoLock::Type type);
bool InitialS3fsCredentials();
bool ParseIAMCredentialResponse(const char* response, iamcredmap_t& keyval);
bool GetIAMCredentialsURL(std::string& url, bool check_iam_role, AutoLock::Type type);
bool LoadIAMCredentials(AutoLock::Type type);
bool SetIAMCredentials(const char* response, AutoLock::Type type);
bool SetIAMRoleFromMetaData(const char* response, AutoLock::Type type);
bool CheckForbiddenBucketParams();
public:
static bool SetBucket(const char* bucket);
static const std::string& GetBucket();
S3fsCred();
~S3fsCred();
bool IsIBMIAMAuth() const { return is_ibm_iam_auth; }
bool LoadIAMRoleFromMetaData();
bool CheckIAMCredentialUpdate(std::string* access_key_id = NULL, std::string* secret_access_key = NULL, std::string* access_token = NULL);
int DetectParam(const char* arg);
bool CheckAllParams();
};
#endif // S3FS_CRED_H_
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: expandtab sw=4 ts=4 fdm=marker
* vim<600: expandtab sw=4 ts=4
*/
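A hedged sketch of how call sites now obtain the bucket name after the global `bucket` variable was folded into S3fsCred (the helper name here is illustrative, not from this diff):

#include <string>
#include "s3fs_cred.h"

// Hypothetical helper: build a copy-source value through the
// credential class's static accessor.
static std::string example_copy_source(const std::string& service_path, const std::string& realpath)
{
    return service_path + S3fsCred::GetBucket() + realpath;
}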

View File

@ -32,11 +32,9 @@ bool noxmlns = false;
std::string program_name;
std::string service_path = "/";
std::string s3host = "https://s3.amazonaws.com";
std::string bucket;
std::string endpoint = "us-east-1";
std::string cipher_suites;
std::string instance_name;
std::string aws_profile = "default";
/*
* Local variables:

View File

@ -10,7 +10,7 @@
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
@ -63,9 +63,9 @@ static const char help_string[] =
"\n"
" default_acl (default=\"private\")\n"
" - the default canned acl to apply to all written s3 objects,\n"
" e.g., private, public-read. see\n"
" e.g., private, public-read. see\n"
" https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl\n"
" for the full list of canned acls\n"
" for the full list of canned ACLs\n"
"\n"
" retries (default=\"5\")\n"
" - number of times to retry a failed S3 transaction\n"
@ -85,7 +85,7 @@ static const char help_string[] =
" - delete local file cache when s3fs starts and exits.\n"
"\n"
" storage_class (default=\"standard\")\n"
" - store object with specified storage class. Possible values:\n"
" - store object with specified storage class. Possible values:\n"
" standard, standard_ia, onezone_ia, reduced_redundancy,\n"
" intelligent_tiering, glacier, and deep_archive.\n"
"\n"
@ -252,20 +252,20 @@ static const char help_string[] =
"\n"
" max_dirty_data (default=\"5120\")\n"
" - flush dirty data to S3 after a certain number of MB written.\n"
" The minimum value is 50 MB. -1 value means disable.\n"
" The minimum value is 50 MB. -1 value means disable.\n"
" Cannot be used with nomixupload.\n"
"\n"
" ensure_diskfree (default 0)\n"
" - sets MB to ensure disk free space. This option means the\n"
" - sets MB to ensure disk free space. This option means the\n"
" threshold of free space size on disk which is used for the\n"
" cache file by s3fs. s3fs makes file for\n"
" cache file by s3fs. s3fs makes file for\n"
" downloading, uploading and caching files. If the disk free\n"
" space is smaller than this value, s3fs do not use diskspace\n"
" space is smaller than this value, s3fs do not use disk space\n"
" as possible in exchange for the performance.\n"
"\n"
" multipart_threshold (default=\"25\")\n"
" - threshold, in MB, to use multipart upload instead of\n"
" single-part. Must be at least 5 MB.\n"
" single-part. Must be at least 5 MB.\n"
"\n"
" singlepart_copy_limit (default=\"512\")\n"
" - maximum size, in MB, of a single-part copy before trying \n"
@ -308,15 +308,19 @@ static const char help_string[] =
" mount point by this option like umask.\n"
"\n"
" umask (default is \"0000\")\n"
" - sets umask for files under the mountpoint. This can allow\n"
" - sets umask for files under the mountpoint. This can allow\n"
" users other than the mounting user to read and write to files\n"
" that they did not create.\n"
"\n"
" nomultipart (disable multipart uploads)\n"
"\n"
" enable_content_md5 (default is disable)\n"
" Allow S3 server to check data integrity of uploads via the\n"
" Content-MD5 header. This can add CPU overhead to transfers.\n"
" - Allow S3 server to check data integrity of uploads via the\n"
" Content-MD5 header. This can add CPU overhead to transfers.\n"
"\n"
" enable_unsigned_payload (default is disable)\n"
" - Do not calculate Content-SHA256 for PutObject and UploadPart\n"
" payloads. This can reduce CPU overhead to transfers.\n"
"\n"
" ecs (default is disable)\n"
" - This option instructs s3fs to query the ECS container credential\n"
@ -330,7 +334,7 @@ static const char help_string[] =
"\n"
" imdsv1only (default is to use IMDSv2 with fallback to v1)\n"
" - AWS instance metadata service, used with IAM role authentication,\n"
" supports the use of an API token. If you're using an IAM role\n"
" supports the use of an API token. If you're using an IAM role\n"
" in an environment that does not support IMDSv2, setting this flag\n"
" will skip retrieval and usage of the API token when retrieving\n"
" IAM credentials.\n"
@ -365,15 +369,15 @@ static const char help_string[] =
" invalidated even if this option is not specified.\n"
"\n"
" nocopyapi (for other incomplete compatibility object storage)\n"
" For a distributed object storage which is compatibility S3\n"
" API without PUT (copy api).\n"
" Enable compatibility with S3-like APIs which do not support\n"
" PUT (copy api).\n"
" If you set this option, s3fs do not use PUT with \n"
" \"x-amz-copy-source\" (copy api). Because traffic is increased\n"
" 2-3 times by this option, we do not recommend this.\n"
"\n"
" norenameapi (for other incomplete compatibility object storage)\n"
" For a distributed object storage which is compatibility S3\n"
" API without PUT (copy api).\n"
" Enable compatibility with S3-like APIs which do not support\n"
" PUT (copy api).\n"
" This option is a subset of nocopyapi option. The nocopyapi\n"
" option does not use copy-api for all command (ex. chmod, chown,\n"
" touch, mv, etc), but this option does not use copy-api for\n"
@ -412,23 +416,23 @@ static const char help_string[] =
" for a object, then the object will not be able to be allowed to\n"
" list/modify.\n"
"\n"
" notsup_compat_dir (not support compatibility directory types)\n"
" As a default, s3fs supports objects of the directory type as\n"
" much as possible and recognizes them as directories.\n"
" Objects that can be recognized as directory objects are \"dir/\",\n"
" \"dir\", \"dir_$folder$\", and there is a file object that does\n"
" not have a directory object but contains that directory path.\n"
" s3fs needs redundant communication to support all these\n"
" directory types. The object as the directory created by s3fs\n"
" is \"dir/\". By restricting s3fs to recognize only \"dir/\" as\n"
" a directory, communication traffic can be reduced. This option\n"
" is used to give this restriction to s3fs.\n"
" However, if there is a directory object other than \"dir/\" in\n"
" the bucket, specifying this option is not recommended. s3fs may\n"
" not be able to recognize the object correctly if an object\n"
" created by s3fs exists in the bucket.\n"
" Please use this option when the directory in the bucket is\n"
" only \"dir/\" object.\n"
" notsup_compat_dir (disable support of alternative directory names)\n"
" s3fs supports the three different naming schemas \"dir/\",\n"
" \"dir\" and \"dir_$folder$\" to map directory names to S3\n"
" objects and vice versa. As a fourth variant, directories can be\n"
" determined indirectly if there is a file object with a path (e.g.\n"
" \"/dir/file\") but without the parent directory.\n"
" \n"
" S3fs uses only the first schema \"dir/\" to create S3 objects for\n"
" directories."
" \n"
" The support for these different naming schemas causes an increased\n"
" communication effort.\n"
" \n"
" If all applications exclusively use the \"dir/\" naming scheme and\n"
" the bucket does not contain any objects with a different naming \n"
" scheme, this option can be used to disable support for alternative\n"
" naming schemes. This reduces access time and can save costs.\nq"
"\n"
" use_wtf8 - support arbitrary file system encoding.\n"
" S3 requires all object names to be valid UTF-8. But some\n"
@ -470,7 +474,7 @@ static const char help_string[] =
" (error), warn (warning), info (information) to debug level.\n"
" default debug level is critical. If s3fs run with \"-d\" option,\n"
" the debug level is set information. When s3fs catch the signal\n"
" SIGUSR2, the debug level is bumpup.\n"
" SIGUSR2, the debug level is bump up.\n"
"\n"
" curldbg - put curl debug message\n"
" Put the debug message from libcurl when this option is specified.\n"
@ -499,7 +503,7 @@ static const char help_string[] =
"\n"
" Most of the generic mount options described in 'man mount' are\n"
" supported (ro, rw, suid, nosuid, dev, nodev, exec, noexec, atime,\n"
" noatime, sync async, dirsync). Filesystems are mounted with\n"
" noatime, sync async, dirsync). Filesystems are mounted with\n"
" '-onodev,nosuid' by default, which can only be overridden by a\n"
" privileged user.\n"
" \n"

View File

@ -257,6 +257,57 @@ S3fsLog::s3fs_log_level S3fsLog::LowBumpupLogLevel()
return old;
}
void s3fs_low_logprn(S3fsLog::s3fs_log_level level, const char* file, const char *func, int line, const char *fmt, ...)
{
if(S3fsLog::IsS3fsLogLevel(level)){
va_list va;
va_start(va, fmt);
size_t len = vsnprintf(NULL, 0, fmt, va) + 1;
va_end(va);
char *message = new char[len];
va_start(va, fmt);
vsnprintf(message, len, fmt, va);
va_end(va);
if(foreground || S3fsLog::IsSetLogFile()){
S3fsLog::SeekEnd();
fprintf(S3fsLog::GetOutputLogFile(), "%s%s%s:%s(%d): %s\n", S3fsLog::GetCurrentTime().c_str(), S3fsLog::GetLevelString(level), file, func, line, message);
S3fsLog::Flush();
}else{
// TODO: why does this differ from s3fs_low_logprn2?
syslog(S3fsLog::GetSyslogLevel(level), "%s%s:%s(%d): %s", instance_name.c_str(), file, func, line, message);
}
delete[] message;
}
}
void s3fs_low_logprn2(S3fsLog::s3fs_log_level level, int nest, const char* file, const char *func, int line, const char *fmt, ...)
{
if(S3fsLog::IsS3fsLogLevel(level)){
va_list va;
va_start(va, fmt);
size_t len = vsnprintf(NULL, 0, fmt, va) + 1;
va_end(va);
char *message = new char[len];
va_start(va, fmt);
vsnprintf(message, len, fmt, va);
va_end(va);
if(foreground || S3fsLog::IsSetLogFile()){
S3fsLog::SeekEnd();
fprintf(S3fsLog::GetOutputLogFile(), "%s%s%s%s:%s(%d): %s\n", S3fsLog::GetCurrentTime().c_str(), S3fsLog::GetLevelString(level), S3fsLog::GetS3fsLogNest(nest), file, func, line, message);
S3fsLog::Flush();
}else{
syslog(S3fsLog::GetSyslogLevel(level), "%s%s%s", instance_name.c_str(), S3fsLog::GetS3fsLogNest(nest), message);
}
delete[] message;
}
}
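A minimal standalone sketch of the sizing idiom used in the two functions above: a first vsnprintf with a NULL buffer reports the required length, then a second call formats for real. Note the va_list must be re-armed with va_start, because the first pass consumes it.

#include <cstdarg>
#include <cstdio>

static char* format_message(const char* fmt, ...)
{
    va_list va;
    va_start(va, fmt);
    int len = vsnprintf(NULL, 0, fmt, va) + 1;   // +1 for the terminator
    va_end(va);

    char* message = new char[len];
    va_start(va, fmt);                           // re-arm for the second pass
    vsnprintf(message, len, fmt, va);
    va_end(va);
    return message;                              // caller must delete[]
}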
/*
* Local variables:
* tab-width: 4

View File

@ -21,6 +21,7 @@
#ifndef S3FS_LOGGER_H_
#define S3FS_LOGGER_H_
#include <cstdarg>
#include <cstdio>
#include <syslog.h>
#include <sys/time.h>
@ -141,30 +142,16 @@ class S3fsLog
//-------------------------------------------------------------------
// Debug macros
//-------------------------------------------------------------------
void s3fs_low_logprn(S3fsLog::s3fs_log_level level, const char* file, const char *func, int line, const char *fmt, ...) __attribute__ ((format (printf, 5, 6)));
#define S3FS_LOW_LOGPRN(level, fmt, ...) \
do{ \
if(S3fsLog::IsS3fsLogLevel(level)){ \
if(foreground || S3fsLog::IsSetLogFile()){ \
S3fsLog::SeekEnd(); \
fprintf(S3fsLog::GetOutputLogFile(), "%s%s%s:%s(%d): " fmt "%s\n", S3fsLog::GetCurrentTime().c_str(), S3fsLog::GetLevelString(level), __FILE__, __func__, __LINE__, __VA_ARGS__); \
S3fsLog::Flush(); \
}else{ \
syslog(S3fsLog::GetSyslogLevel(level), "%s%s:%s(%d): " fmt "%s", instance_name.c_str(), __FILE__, __func__, __LINE__, __VA_ARGS__); \
} \
} \
s3fs_low_logprn(level, __FILE__, __func__, __LINE__, fmt, ##__VA_ARGS__); \
}while(0)
void s3fs_low_logprn2(S3fsLog::s3fs_log_level level, int nest, const char* file, const char *func, int line, const char *fmt, ...) __attribute__ ((format (printf, 6, 7)));
#define S3FS_LOW_LOGPRN2(level, nest, fmt, ...) \
do{ \
if(S3fsLog::IsS3fsLogLevel(level)){ \
if(foreground || S3fsLog::IsSetLogFile()){ \
S3fsLog::SeekEnd(); \
fprintf(S3fsLog::GetOutputLogFile(), "%s%s%s%s:%s(%d): " fmt "%s\n", S3fsLog::GetCurrentTime().c_str(), S3fsLog::GetLevelString(level), S3fsLog::GetS3fsLogNest(nest), __FILE__, __func__, __LINE__, __VA_ARGS__); \
S3fsLog::Flush(); \
}else{ \
syslog(S3fsLog::GetSyslogLevel(level), "%s%s" fmt "%s", instance_name.c_str(), S3fsLog::GetS3fsLogNest(nest), __VA_ARGS__); \
} \
} \
s3fs_low_logprn2(level, nest, __FILE__, __func__, __LINE__, fmt, ##__VA_ARGS__); \
}while(0)
#define S3FS_LOW_CURLDBG(fmt, ...) \
@ -229,14 +216,14 @@ class S3fsLog
// small trick for VA_ARGS
//
#define S3FS_PRN_EXIT(fmt, ...) S3FS_LOW_LOGPRN_EXIT(fmt, ##__VA_ARGS__, "")
#define S3FS_PRN_CRIT(fmt, ...) S3FS_LOW_LOGPRN(S3fsLog::LEVEL_CRIT, fmt, ##__VA_ARGS__, "")
#define S3FS_PRN_ERR(fmt, ...) S3FS_LOW_LOGPRN(S3fsLog::LEVEL_ERR, fmt, ##__VA_ARGS__, "")
#define S3FS_PRN_WARN(fmt, ...) S3FS_LOW_LOGPRN(S3fsLog::LEVEL_WARN, fmt, ##__VA_ARGS__, "")
#define S3FS_PRN_DBG(fmt, ...) S3FS_LOW_LOGPRN(S3fsLog::LEVEL_DBG, fmt, ##__VA_ARGS__, "")
#define S3FS_PRN_INFO(fmt, ...) S3FS_LOW_LOGPRN2(S3fsLog::LEVEL_INFO, 0, fmt, ##__VA_ARGS__, "")
#define S3FS_PRN_INFO1(fmt, ...) S3FS_LOW_LOGPRN2(S3fsLog::LEVEL_INFO, 1, fmt, ##__VA_ARGS__, "")
#define S3FS_PRN_INFO2(fmt, ...) S3FS_LOW_LOGPRN2(S3fsLog::LEVEL_INFO, 2, fmt, ##__VA_ARGS__, "")
#define S3FS_PRN_INFO3(fmt, ...) S3FS_LOW_LOGPRN2(S3fsLog::LEVEL_INFO, 3, fmt, ##__VA_ARGS__, "")
#define S3FS_PRN_CRIT(fmt, ...) S3FS_LOW_LOGPRN(S3fsLog::LEVEL_CRIT, fmt, ##__VA_ARGS__)
#define S3FS_PRN_ERR(fmt, ...) S3FS_LOW_LOGPRN(S3fsLog::LEVEL_ERR, fmt, ##__VA_ARGS__)
#define S3FS_PRN_WARN(fmt, ...) S3FS_LOW_LOGPRN(S3fsLog::LEVEL_WARN, fmt, ##__VA_ARGS__)
#define S3FS_PRN_DBG(fmt, ...) S3FS_LOW_LOGPRN(S3fsLog::LEVEL_DBG, fmt, ##__VA_ARGS__)
#define S3FS_PRN_INFO(fmt, ...) S3FS_LOW_LOGPRN2(S3fsLog::LEVEL_INFO, 0, fmt, ##__VA_ARGS__)
#define S3FS_PRN_INFO1(fmt, ...) S3FS_LOW_LOGPRN2(S3fsLog::LEVEL_INFO, 1, fmt, ##__VA_ARGS__)
#define S3FS_PRN_INFO2(fmt, ...) S3FS_LOW_LOGPRN2(S3fsLog::LEVEL_INFO, 2, fmt, ##__VA_ARGS__)
#define S3FS_PRN_INFO3(fmt, ...) S3FS_LOW_LOGPRN2(S3fsLog::LEVEL_INFO, 3, fmt, ##__VA_ARGS__)
#define S3FS_PRN_CURL(fmt, ...) S3FS_LOW_CURLDBG(fmt, ##__VA_ARGS__, "")
#define S3FS_PRN_CACHE(fp, ...) S3FS_LOW_CACHE(fp, ##__VA_ARGS__, "")

View File

@ -21,6 +21,16 @@
#ifndef S3FS_S3FS_UTIL_H_
#define S3FS_S3FS_UTIL_H_
#ifndef CLOCK_REALTIME
#define CLOCK_REALTIME 0
#endif
#ifndef CLOCK_MONOTONIC
#define CLOCK_MONOTONIC CLOCK_REALTIME
#endif
#ifndef CLOCK_MONOTONIC_COARSE
#define CLOCK_MONOTONIC_COARSE CLOCK_MONOTONIC
#endif
//-------------------------------------------------------------------
// Functions
//-------------------------------------------------------------------

View File

@ -25,41 +25,58 @@
#include "s3fs.h"
#include "s3fs_xml.h"
#include "s3fs_util.h"
#include "autolock.h"
//-------------------------------------------------------------------
// Variables
//-------------------------------------------------------------------
static const char c_strErrorObjectName[] = "FILE or SUBDIR in DIR";
// [NOTE]
// mutex for static variables in GetXmlNsUrl
//
static pthread_mutex_t* pxml_parser_mutex = NULL;
//-------------------------------------------------------------------
// Functions
//-------------------------------------------------------------------
static bool GetXmlNsUrl(xmlDocPtr doc, std::string& nsurl)
{
static time_t tmLast = 0; // cache for 60 sec.
static std::string strNs;
bool result = false;
if(!doc){
return false;
if(!pxml_parser_mutex || !doc){
return result;
}
if((tmLast + 60) < time(NULL)){
// refresh
tmLast = time(NULL);
strNs = "";
xmlNodePtr pRootNode = xmlDocGetRootElement(doc);
if(pRootNode){
xmlNsPtr* nslist = xmlGetNsList(doc, pRootNode);
if(nslist){
if(nslist[0] && nslist[0]->href){
strNs = (const char*)(nslist[0]->href);
std::string tmpNs;
{
static time_t tmLast = 0; // cache for 60 sec.
static std::string strNs;
AutoLock lock(pxml_parser_mutex);
if((tmLast + 60) < time(NULL)){
// refresh
tmLast = time(NULL);
strNs = "";
xmlNodePtr pRootNode = xmlDocGetRootElement(doc);
if(pRootNode){
xmlNsPtr* nslist = xmlGetNsList(doc, pRootNode);
if(nslist){
if(nslist[0] && nslist[0]->href){
int len = xmlStrlen(nslist[0]->href);
if(0 < len){
strNs = std::string((const char*)(nslist[0]->href), len);
}
}
S3FS_XMLFREE(nslist);
}
S3FS_XMLFREE(nslist);
}
}
tmpNs = strNs;
}
if(!strNs.empty()){
nsurl = strNs;
if(!tmpNs.empty()){
nsurl = tmpNs;
result = true;
}
return result;
@ -144,12 +161,12 @@ static char* get_object_name(xmlDocPtr doc, xmlNodePtr node, const char* path)
const char* basepath= (path && '/' == path[0]) ? &path[1] : path;
xmlFree(fullpath);
if(!mybname || '\0' == mybname[0]){
if('\0' == mybname[0]){
return NULL;
}
// check subdir & file in subdir
if(dirpath && 0 < strlen(dirpath)){
if(0 < strlen(dirpath)){
// case of "/"
if(0 == strcmp(mybname, "/") && 0 == strcmp(dirpath, "/")){
return (char*)c_strErrorObjectName;
@ -175,6 +192,8 @@ static char* get_object_name(xmlDocPtr doc, xmlNodePtr node, const char* path)
if(strlen(dirpath) > strlen(basepath)){
withdirname = &dirpath[strlen(basepath)];
}
// cppcheck-suppress unmatchedSuppression
// cppcheck-suppress knownConditionTrueFalse
if(!withdirname.empty() && '/' != *withdirname.rbegin()){
withdirname += "/";
}
@ -492,6 +511,44 @@ bool simple_parse_xml(const char* data, size_t len, const char* key, std::string
return result;
}
//-------------------------------------------------------------------
// Utility for lock
//-------------------------------------------------------------------
bool init_parser_xml_lock()
{
if(pxml_parser_mutex){
return false;
}
pxml_parser_mutex = new pthread_mutex_t;
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
#if S3FS_PTHREAD_ERRORCHECK
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
#endif
if(0 != pthread_mutex_init(pxml_parser_mutex, &attr)){
delete pxml_parser_mutex;
pxml_parser_mutex = NULL;
return false;
}
return true;
}
bool destroy_parser_xml_lock()
{
if(!pxml_parser_mutex){
return false;
}
if(0 != pthread_mutex_destroy(pxml_parser_mutex)){
return false;
}
delete pxml_parser_mutex;
pxml_parser_mutex = NULL;
return true;
}
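A standalone sketch of the pattern GetXmlNsUrl now follows: refresh function-static state under the mutex, copy the value out, and use only the copy after unlocking, so no caller ever reads the statics unsynchronized. (Raw pthread calls are used here for brevity; the real code goes through AutoLock.)

#include <pthread.h>
#include <string>
#include <ctime>

static std::string cached_lookup(pthread_mutex_t* lock)
{
    static time_t      last = 0;
    static std::string cache;

    std::string copied;
    pthread_mutex_lock(lock);
    if((last + 60) < time(NULL)){
        last  = time(NULL);
        cache = "refreshed value";   // stands in for the xmlGetNsList lookup
    }
    copied = cache;                  // take a copy while still locked
    pthread_mutex_unlock(lock);
    return copied;                   // safe to use without the lock
}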
/*
* Local variables:
* tab-width: 4

View File

@ -42,6 +42,9 @@ bool get_incomp_mpu_list(xmlDocPtr doc, incomp_mpu_list_t& list);
bool simple_parse_xml(const char* data, size_t len, const char* key, std::string& value);
bool init_parser_xml_lock();
bool destroy_parser_xml_lock();
#endif // S3FS_S3FS_XML_H_
/*

View File

@ -69,6 +69,24 @@ template<> std::string str(struct timespec value) {
// Functions
//-------------------------------------------------------------------
#ifdef __MSYS__
/*
* Polyfill for strptime function
*
* This source code is from https://gist.github.com/jeremyfromearth/5694aa3a66714254752179ecf3c95582 .
*/
char* strptime(const char* s, const char* f, struct tm* tm)
{
std::istringstream input(s);
input.imbue(std::locale(setlocale(LC_ALL, nullptr)));
input >> std::get_time(tm, f);
if (input.fail()) {
return nullptr;
}
return (char*)(s + input.tellg());
}
#endif
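A brief usage sketch: the polyfill behaves like POSIX strptime, returning a pointer just past the consumed input on success and NULL on failure (the date string below is arbitrary).

#include <ctime>
#include <cstring>

static bool parse_date_example()
{
    struct tm tm;
    memset(&tm, 0, sizeof(struct tm));
    return NULL != strptime("2022-03-02 22:41:10", "%Y-%m-%d %H:%M:%S", &tm);
}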
bool s3fs_strtoofft(off_t* value, const char* str, int base)
{
if(value == NULL || str == NULL){
@ -390,7 +408,7 @@ char* s3fs_base64(const unsigned char* input, size_t length)
if(!input || 0 == length){
return NULL;
}
result = new char[((length / 3) + 1) * 4 + 1];
result = new char[((length + 3 - 1) / 3) * 4 + 1];
unsigned char parts[4];
size_t rpos;
@ -432,16 +450,15 @@ inline unsigned char char_decode64(const char ch)
return by;
}
unsigned char* s3fs_decode64(const char* input, size_t* plength)
unsigned char* s3fs_decode64(const char* input, size_t input_len, size_t* plength)
{
unsigned char* result;
if(!input || 0 == strlen(input) || !plength){
if(!input || 0 == input_len || !plength){
return NULL;
}
result = new unsigned char[strlen(input) + 1];
result = new unsigned char[input_len / 4 * 3];
unsigned char parts[4];
size_t input_len = strlen(input);
size_t rpos;
size_t wpos;
for(rpos = 0, wpos = 0; rpos < input_len; rpos += 4){
@ -460,7 +477,6 @@ unsigned char* s3fs_decode64(const char* input, size_t* plength)
}
result[wpos++] = ((parts[2] << 6) & 0xc0) | (parts[3] & 0x3f);
}
result[wpos] = '\0';
*plength = wpos;
return result;
}
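
Callers now pass the encoded length explicitly, and the result must be treated as a raw byte buffer: the output needs at most input_len / 4 * 3 bytes, the exact count is reported through *plength, and the trailing NUL write is gone. A minimal usage sketch, assuming the updated declaration from the header shown below:

#include <cstddef>
#include "string_util.h"   // declares the updated s3fs_decode64

int main()
{
    size_t decoded_len = 0;
    unsigned char* decoded = s3fs_decode64("MTIzNA==", 8, &decoded_len);
    if(decoded){
        // decoded_len is 4 and the bytes are '1','2','3','4';
        // no terminating NUL is appended anymore.
        delete[] decoded;
    }
    return 0;
}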


@ -54,6 +54,12 @@ template <class T> std::string str(T value);
//-------------------------------------------------------------------
// Utilities
//-------------------------------------------------------------------
#ifdef __MSYS__
//
// Polyfill for strptime function.
//
char* strptime(const char* s, const char* f, struct tm* tm);
#endif
//
// Convert string to off_t. Returns false on bad input.
// Replacement for C++11 std::stoll.
@ -99,7 +105,7 @@ bool get_keyword_value(const std::string& target, const char* keyword, std::stri
std::string s3fs_hex_lower(const unsigned char* input, size_t length);
std::string s3fs_hex_upper(const unsigned char* input, size_t length);
char* s3fs_base64(const unsigned char* input, size_t length);
unsigned char* s3fs_decode64(const char* input, size_t* plength);
unsigned char* s3fs_decode64(const char* input, size_t input_len, size_t* plength);
//
// WTF8


@ -24,6 +24,34 @@
#include "curl_util.h"
#include "test_util.h"
//---------------------------------------------------------
// S3fsCred Stub
//
// [NOTE]
// This test program links curl_util.cpp only to use the
// curl_slist_sort_insert function.
// curl_util.cpp calls S3fsCred::GetBucket(), which would
// otherwise cause a link error. That method is not used by
// this test, so a stub class is defined here instead.
// Linking the full S3fsCred implementation, or stubbing
// every member, is not practical, so this is the simplest
// workaround.
//
class S3fsCred
{
private:
static std::string bucket_name;
public:
static const std::string& GetBucket();
};
std::string S3fsCred::bucket_name;
const std::string& S3fsCred::GetBucket()
{
return S3fsCred::bucket_name;
}
//---------------------------------------------------------
#define ASSERT_IS_SORTED(x) assert_is_sorted((x), __FILE__, __LINE__)
void assert_is_sorted(struct curl_slist* list, const char *file, int line)


@ -64,23 +64,35 @@ void test_trim()
void test_base64()
{
unsigned char *buf;
size_t len;
ASSERT_STREQUALS(s3fs_base64(NULL, 0), NULL);
ASSERT_STREQUALS(reinterpret_cast<const char *>(s3fs_decode64(NULL, &len)), NULL);
buf = s3fs_decode64(NULL, 0, &len);
ASSERT_BUFEQUALS(reinterpret_cast<const char *>(buf), len, NULL, 0);
ASSERT_STREQUALS(s3fs_base64(reinterpret_cast<const unsigned char *>(""), 0), NULL);
ASSERT_STREQUALS(reinterpret_cast<const char *>(s3fs_decode64("", &len)), NULL);
buf = s3fs_decode64("", 0, &len);
ASSERT_BUFEQUALS(reinterpret_cast<const char *>(buf), len, NULL, 0);
ASSERT_STREQUALS(s3fs_base64(reinterpret_cast<const unsigned char *>("1"), 1), "MQ==");
ASSERT_STREQUALS(reinterpret_cast<const char *>(s3fs_decode64("MQ==", &len)), "1");
buf = s3fs_decode64("MQ==", 4, &len);
ASSERT_BUFEQUALS(reinterpret_cast<const char *>(buf), len, "1", 1);
ASSERT_EQUALS(len, static_cast<size_t>(1));
ASSERT_STREQUALS(s3fs_base64(reinterpret_cast<const unsigned char *>("12"), 2), "MTI=");
ASSERT_STREQUALS(reinterpret_cast<const char *>(s3fs_decode64("MTI=", &len)), "12");
buf = s3fs_decode64("MTI=", 4, &len);
ASSERT_BUFEQUALS(reinterpret_cast<const char *>(buf), len, "12", 2);
ASSERT_EQUALS(len, static_cast<size_t>(2));
ASSERT_STREQUALS(s3fs_base64(reinterpret_cast<const unsigned char *>("123"), 3), "MTIz");
ASSERT_STREQUALS(reinterpret_cast<const char *>(s3fs_decode64("MTIz", &len)), "123");
buf = s3fs_decode64("MTIz", 4, &len);
ASSERT_BUFEQUALS(reinterpret_cast<const char *>(buf), len, "123", 3);
ASSERT_EQUALS(len, static_cast<size_t>(3));
ASSERT_STREQUALS(s3fs_base64(reinterpret_cast<const unsigned char *>("1234"), 4), "MTIzNA==");
ASSERT_STREQUALS(reinterpret_cast<const char *>(s3fs_decode64("MTIzNA==", &len)), "1234");
buf = s3fs_decode64("MTIzNA==", 8, &len);
ASSERT_BUFEQUALS(reinterpret_cast<const char *>(buf), len, "1234", 4);
ASSERT_EQUALS(len, static_cast<size_t>(4));
// TODO: invalid input


@ -76,11 +76,23 @@ void assert_strequals(const char *x, const char *y, const char *file, int line)
}
}
void assert_bufequals(const char *x, size_t len1, const char *y, size_t len2, const char *file, int line)
{
if(x == NULL && y == NULL){
return;
// cppcheck-suppress nullPointerRedundantCheck
} else if(x == NULL || y == NULL || len1 != len2 || memcmp(x, y, len1) != 0){
std::cerr << (x ? std::string(x, len1) : "null") << " != " << (y ? std::string(y, len2) : "null") << " at " << file << ":" << line << std::endl;
std::exit(1);
}
}
#define ASSERT_TRUE(x) assert_equals((x), true, __FILE__, __LINE__)
#define ASSERT_FALSE(x) assert_equals((x), false, __FILE__, __LINE__)
#define ASSERT_EQUALS(x, y) assert_equals((x), (y), __FILE__, __LINE__)
#define ASSERT_NEQUALS(x, y) assert_nequals((x), (y), __FILE__, __LINE__)
#define ASSERT_STREQUALS(x, y) assert_strequals((x), (y), __FILE__, __LINE__)
#define ASSERT_BUFEQUALS(x, len1, y, len2) assert_bufequals((x), (len1), (y), (len2), __FILE__, __LINE__)
#endif // S3FS_TEST_UTIL_H_


@ -174,7 +174,29 @@ enum signature_type_t {
//----------------------------------------------
// etaglist_t / filepart / untreatedpart
//----------------------------------------------
typedef std::list<std::string> etaglist_t;
//
// Etag string and part number pair
//
struct etagpair
{
std::string etag; // expected etag value
int part_num; // part number
etagpair(const char* petag = NULL, int part = -1) : etag(petag ? petag : ""), part_num(part) {}
~etagpair()
{
clear();
}
void clear()
{
etag.erase();
part_num = -1;
}
};
typedef std::list<etagpair> etaglist_t;
//
// Each part information for Multipart upload
@ -187,9 +209,9 @@ struct filepart
off_t startpos; // seek fd point for uploading
off_t size; // uploading size
bool is_copy; // whether is copy multipart
std::string* petag; // use only parallel upload
etagpair* petag; // use only parallel upload
filepart(bool is_uploaded = false, int _fd = -1, off_t part_start = 0, off_t part_size = -1, bool is_copy_part = false, std::string* petag = NULL) : uploaded(false), fd(_fd), startpos(part_start), size(part_size), is_copy(is_copy_part), petag(petag) {}
filepart(bool is_uploaded = false, int _fd = -1, off_t part_start = 0, off_t part_size = -1, bool is_copy_part = false, etagpair* petagpair = NULL) : uploaded(false), fd(_fd), startpos(part_start), size(part_size), is_copy(is_copy_part), petag(petagpair) {}
~filepart()
{
@ -207,16 +229,27 @@ struct filepart
petag = NULL;
}
void add_etag_list(etaglist_t* list)
void add_etag_list(etaglist_t& list, int partnum = -1)
{
list->push_back(std::string());
petag = &list->back();
if(-1 == partnum){
partnum = static_cast<int>(list.size()) + 1;
}
list.push_back(etagpair(NULL, partnum));
petag = &list.back();
}
void add_etag(std::string* petagobj)
void set_etag(etagpair* petagobj)
{
petag = petagobj;
}
int get_part_number()
{
if(!petag){
return -1;
}
return petag->part_num;
}
};
typedef std::list<filepart> filepart_list_t;
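
The net effect is that each part now carries its part number alongside its expected etag, so completion code can recover the number even when parts finish out of order. A minimal sketch of the new registration flow, assuming the etagpair/filepart/etaglist_t definitions above are in scope (they live in a project header not shown in this excerpt):

int main()
{
    etaglist_t etags;
    filepart part;                // defaults: not uploaded, fd = -1
    part.add_etag_list(etags);    // appends etagpair(NULL, etags.size() + 1)
    return part.get_part_number() == 1 ? 0 : 1;   // the first part is number 1
}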


@ -22,7 +22,6 @@ TESTS=small-integration-test.sh
EXTRA_DIST = \
integration-test-common.sh \
require-root.sh \
small-integration-test.sh \
mergedir.sh \
sample_delcache.sh \
@ -30,6 +29,13 @@ EXTRA_DIST = \
testdir = test
noinst_PROGRAMS = \
junk_data \
write_multiblock
junk_data_SOURCES = junk_data.c
write_multiblock_SOURCES = write_multiblock.cc
#
# Local variables:
# tab-width: 4


@ -28,25 +28,25 @@ func_usage()
echo ""
}
PRGNAME=`basename $0`
SCRIPTDIR=`dirname $0`
S3FSDIR=`cd ${SCRIPTDIR}/..; pwd`
TOPDIR=`cd ${S3FSDIR}/test; pwd`
PRGNAME=$(basename "$0")
SCRIPTDIR=$(dirname "$0")
S3FSDIR=$(cd "${SCRIPTDIR}"/.. || exit 1; pwd)
TOPDIR=$(cd "${S3FSDIR}"/test || exit 1; pwd)
SUITELOG="${TOPDIR}/test-suite.log"
TMP_LINENO_FILE="/tmp/.lineno.tmp"
while [ $# -ne 0 ]; do
if [ "X$1" = "X" ]; then
break
elif [ "X$1" = "X-h" -o "X$1" = "X-H" -o "X$1" = "X--help" -o "X$1" = "X--HELP" ]; then
func_usage ${PRGNAME}
elif [ "X$1" = "X-h" ] || [ "X$1" = "X-H" ] || [ "X$1" = "X--help" ] || [ "X$1" = "X--HELP" ]; then
func_usage "${PRGNAME}"
exit 0
else
SUITELOG=$1
fi
shift
done
if [ ! -f ${SUITELOG} ]; then
if [ ! -f "${SUITELOG}" ]; then
echo "[ERROR] not found ${SUITELOG} log file."
exit 1
fi
@ -59,75 +59,77 @@ fi
# 2 : passed line of end of one small test(specified in test-utils.sh)
# 3 : failed line of end of one small test(specified in test-utils.sh)
#
grep -n -e 'test_.*: ".*"' -o -e 'test_.* passed' -o -e 'test_.* failed' ${SUITELOG} 2>/dev/null | sed 's/:test_.*: ".*"/ 1/g' | sed 's/:test_.* passed/ 2/g' | sed 's/:test_.* failed/ 3/g' > ${TMP_LINENO_FILE}
grep -n -e 'test_.*: ".*"' -o -e 'test_.* passed' -o -e 'test_.* failed' "${SUITELOG}" 2>/dev/null | sed 's/:test_.*: ".*"/ 1/g' | sed 's/:test_.* passed/ 2/g' | sed 's/:test_.* failed/ 3/g' > "${TMP_LINENO_FILE}"
#
# Loop for printing result
#
prev_line_type=0
prev_line_number=1
while read line; do
while read -r line; do
# line is "<line number> <line type>"
number_type=($line)
#
# shellcheck disable=SC2206
number_type=(${line})
head_line_cnt=`expr ${number_type[0]} - 1`
tail_line_cnt=`expr ${number_type[0]} - ${prev_line_number}`
head_line_cnt=$((number_type[0] - 1))
tail_line_cnt=$((number_type[0] - prev_line_number))
if [ ${number_type[1]} -eq 2 ]; then
if [ "${number_type[1]}" -eq 2 ]; then
echo ""
fi
if [ ${prev_line_type} -eq 1 ]; then
if [ ${number_type[1]} -eq 2 ]; then
if [ "${prev_line_type}" -eq 1 ]; then
if [ "${number_type[1]}" -eq 2 ]; then
# if passed, cut s3fs information messages
head -${head_line_cnt} ${SUITELOG} | tail -${tail_line_cnt} | grep -v -e '[0-9]\+\%' | grep -v -e '^s3fs: ' -a -e '\[INF\]'
elif [ ${number_type[1]} -eq 3 ]; then
head "-${head_line_cnt}" "${SUITELOG}" | tail "-${tail_line_cnt}" | grep -v -e '[0-9]\+\%' | grep -v -e '^s3fs: ' -a -e '\[INF\]'
elif [ "${number_type[1]}" -eq 3 ]; then
# if failed, print all
head -${head_line_cnt} ${SUITELOG} | tail -${tail_line_cnt} | grep -v -e '[0-9]\+\%'
head "-${head_line_cnt}" "${SUITELOG}" | tail "-${tail_line_cnt}" | grep -v -e '[0-9]\+\%'
else
# there is start keyword but not end keyword, so print all
head -${head_line_cnt} ${SUITELOG} | tail -${tail_line_cnt} | grep -v -e '[0-9]\+\%'
head "-${head_line_cnt}" "${SUITELOG}" | tail "-${tail_line_cnt}" | grep -v -e '[0-9]\+\%'
fi
elif [ ${prev_line_type} -eq 2 -o ${prev_line_type} -eq 3 ]; then
if [ ${number_type[1]} -eq 2 -o ${number_type[1]} -eq 3 ]; then
elif [ "${prev_line_type}" -eq 2 ] || [ "${prev_line_type}" -eq 3 ]; then
if [ "${number_type[1]}" -eq 2 ] || [ "${number_type[1]}" -eq 3 ]; then
# previous is the end of a test, but this line is also an end marker without a start keyword, so print all
head -${head_line_cnt} ${SUITELOG} | tail -${tail_line_cnt} | grep -v -e '[0-9]\+\%'
head "-${head_line_cnt}" "${SUITELOG}" | tail "-${tail_line_cnt}" | grep -v -e '[0-9]\+\%'
else
# this area is not from start to end, cut s3fs information messages
head -${head_line_cnt} ${SUITELOG} | tail -${tail_line_cnt} | grep -v -e '[0-9]\+\%' | grep -v -e '^s3fs: ' -a -e '\[INF\]'
head "-${head_line_cnt}" "${SUITELOG}" | tail "-${tail_line_cnt}" | grep -v -e '[0-9]\+\%' | grep -v -e '^s3fs: ' -a -e '\[INF\]'
fi
else
if [ ${number_type[1]} -eq 2 -o ${number_type[1]} -eq 3 ]; then
if [ "${number_type[1]}" -eq 2 ] || [ "${number_type[1]}" -eq 3 ]; then
# previous is normal, but this line is an end marker without a start keyword, so print all
head -${head_line_cnt} ${SUITELOG} | tail -${tail_line_cnt} | grep -v -e '[0-9]\+\%'
head "-${head_line_cnt}" "${SUITELOG}" | tail "-${tail_line_cnt}" | grep -v -e '[0-9]\+\%'
else
# this area is normal, cut s3fs information messages
head -${head_line_cnt} ${SUITELOG} | tail -${tail_line_cnt} | grep -v -e '[0-9]\+\%' | grep -v -e '^s3fs: ' -a -e '\[INF\]'
head "-${head_line_cnt}" "${SUITELOG}" | tail "-${tail_line_cnt}" | grep -v -e '[0-9]\+\%' | grep -v -e '^s3fs: ' -a -e '\[INF\]'
fi
fi
if [ ${number_type[1]} -eq 3 ]; then
if [ "${number_type[1]}" -eq 3 ]; then
echo ""
fi
prev_line_type=${number_type[1]}
prev_line_number=${number_type[0]}
prev_line_type="${number_type[1]}"
prev_line_number="${number_type[0]}"
done < ${TMP_LINENO_FILE}
done < "${TMP_LINENO_FILE}"
#
# Print rest lines
#
file_line_cnt=`wc -l ${SUITELOG} | awk '{print $1}'`
tail_line_cnt=`expr ${file_line_cnt} - ${prev_line_number}`
file_line_cnt=$(wc -l "${SUITELOG}" | awk '{print $1}')
tail_line_cnt=$((file_line_cnt - prev_line_number))
if [ ${prev_line_type} -eq 1 ]; then
tail -${tail_line_cnt} ${SUITELOG} | grep -v -e '[0-9]\+\%'
if [ "${prev_line_type}" -eq 1 ]; then
tail "-${tail_line_cnt}" "${SUITELOG}" | grep -v -e '[0-9]\+\%'
else
tail -${tail_line_cnt} ${SUITELOG} | grep -v -e '[0-9]\+\%' | grep -v -e '^s3fs: ' -a -e '\[INF\]'
tail "-${tail_line_cnt}" "${SUITELOG}" | grep -v -e '[0-9]\+\%' | grep -v -e '^s3fs: ' -a -e '\[INF\]'
fi
#
# Remove temp file
#
rm -f ${TMP_LINENO_FILE}
rm -f "${TMP_LINENO_FILE}"
exit 0


@ -66,62 +66,65 @@ set -o pipefail
S3FS=../src/s3fs
# Allow these defaulted values to be overridden
: ${S3_URL:="https://127.0.0.1:8080"}
: ${S3_ENDPOINT:="us-east-1"}
: ${S3FS_CREDENTIALS_FILE:="passwd-s3fs"}
: ${TEST_BUCKET_1:="s3fs-integration-test"}
: "${S3_URL:="https://127.0.0.1:8080"}"
: "${S3_ENDPOINT:="us-east-1"}"
: "${S3FS_CREDENTIALS_FILE:="passwd-s3fs"}"
: "${TEST_BUCKET_1:="s3fs-integration-test"}"
export TEST_BUCKET_1
export S3_URL
export S3_ENDPOINT
export TEST_SCRIPT_DIR=`pwd`
TEST_SCRIPT_DIR=$(pwd)
export TEST_SCRIPT_DIR
export TEST_BUCKET_MOUNT_POINT_1=${TEST_BUCKET_1}
S3PROXY_VERSION="1.8.0"
S3PROXY_BINARY=${S3PROXY_BINARY-"s3proxy-${S3PROXY_VERSION}"}
S3PROXY_VERSION="1.9.0"
S3PROXY_BINARY="${S3PROXY_BINARY-"s3proxy-${S3PROXY_VERSION}"}"
CHAOS_HTTP_PROXY_VERSION="1.1.0"
CHAOS_HTTP_PROXY_BINARY="chaos-http-proxy-${CHAOS_HTTP_PROXY_VERSION}"
if [ ! -f "$S3FS_CREDENTIALS_FILE" ]
then
echo "Missing credentials file: $S3FS_CREDENTIALS_FILE"
echo "Missing credentials file: ${S3FS_CREDENTIALS_FILE}"
exit 1
fi
chmod 600 "$S3FS_CREDENTIALS_FILE"
chmod 600 "${S3FS_CREDENTIALS_FILE}"
if [ -z "${S3FS_PROFILE}" ]; then
export AWS_ACCESS_KEY_ID=$(cut -d: -f1 ${S3FS_CREDENTIALS_FILE})
export AWS_SECRET_ACCESS_KEY=$(cut -d: -f2 ${S3FS_CREDENTIALS_FILE})
AWS_ACCESS_KEY_ID=$(cut -d: -f1 "${S3FS_CREDENTIALS_FILE}")
export AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=$(cut -d: -f2 "${S3FS_CREDENTIALS_FILE}")
export AWS_SECRET_ACCESS_KEY
fi
if [ ! -d $TEST_BUCKET_MOUNT_POINT_1 ]
then
mkdir -p $TEST_BUCKET_MOUNT_POINT_1
if [ ! -d "${TEST_BUCKET_MOUNT_POINT_1}" ]; then
mkdir -p "${TEST_BUCKET_MOUNT_POINT_1}"
fi
# This function executes the command given in the remaining
# parameters up to $1 times before giving up, with a 1-second
# delay between attempts (used below, e.g., to wait for the
# mountpoint to appear in /proc/mounts).
function retry {
set +o errexit
N=$1; shift;
status=0
for i in $(seq $N); do
local N="$1"
shift
rc=0
for _ in $(seq "${N}"); do
echo "Trying: $*"
# shellcheck disable=SC2068,SC2294
eval $@
status=$?
if [ $status == 0 ]; then
rc=$?
if [ "${rc}" -eq 0 ]; then
break
fi
sleep 1
echo "Retrying: $*"
done
if [ $status != 0 ]; then
if [ "${rc}" -ne 0 ]; then
echo "timeout waiting for $*"
fi
set -o errexit
return $status
return "${rc}"
}
# Proxy is not started if S3PROXY_BINARY is an empty string
@ -130,20 +133,25 @@ function retry {
#
function start_s3proxy {
if [ -n "${PUBLIC}" ]; then
S3PROXY_CONFIG="s3proxy-noauth.conf"
local S3PROXY_CONFIG="s3proxy-noauth.conf"
else
S3PROXY_CONFIG="s3proxy.conf"
local S3PROXY_CONFIG="s3proxy.conf"
fi
if [ -n "${S3PROXY_BINARY}" ]
then
if [ ! -e "${S3PROXY_BINARY}" ]; then
wget "https://github.com/gaul/s3proxy/releases/download/s3proxy-${S3PROXY_VERSION}/s3proxy" \
--quiet -O "${S3PROXY_BINARY}"
curl "https://github.com/gaul/s3proxy/releases/download/s3proxy-${S3PROXY_VERSION}/s3proxy" \
--fail --location --silent --output "${S3PROXY_BINARY}"
chmod +x "${S3PROXY_BINARY}"
fi
${STDBUF_BIN} -oL -eL java -jar "$S3PROXY_BINARY" --properties $S3PROXY_CONFIG &
# generate self-signed SSL certificate
rm -f /tmp/keystore.jks /tmp/keystore.pem
echo -e 'password\npassword\n\n\n\n\n\n\nyes' | keytool -genkey -keystore /tmp/keystore.jks -keyalg RSA -keysize 2048 -validity 365 -ext SAN=IP:127.0.0.1
echo password | keytool -exportcert -keystore /tmp/keystore.jks -rfc -file /tmp/keystore.pem
"${STDBUF_BIN}" -oL -eL java -jar "${S3PROXY_BINARY}" --properties "${S3PROXY_CONFIG}" &
S3PROXY_PID=$!
# wait for S3Proxy to start
@ -152,12 +160,12 @@ function start_s3proxy {
if [ -n "${CHAOS_HTTP_PROXY}" ]; then
if [ ! -e "${CHAOS_HTTP_PROXY_BINARY}" ]; then
wget "https://github.com/bouncestorage/chaos-http-proxy/releases/download/chaos-http-proxy-${CHAOS_HTTP_PROXY_VERSION}/chaos-http-proxy" \
--quiet -O "${CHAOS_HTTP_PROXY_BINARY}"
curl "https://github.com/bouncestorage/chaos-http-proxy/releases/download/chaos-http-proxy-${CHAOS_HTTP_PROXY_VERSION}/chaos-http-proxy" \
--fail --location --silent --output "${CHAOS_HTTP_PROXY_BINARY}"
chmod +x "${CHAOS_HTTP_PROXY_BINARY}"
fi
${STDBUF_BIN} -oL -eL java -jar ${CHAOS_HTTP_PROXY_BINARY} --properties chaos-http-proxy.conf &
"${STDBUF_BIN}" -oL -eL java -jar "${CHAOS_HTTP_PROXY_BINARY}" --properties chaos-http-proxy.conf &
CHAOS_HTTP_PROXY_PID=$!
# wait for Chaos HTTP Proxy to start
@ -168,12 +176,12 @@ function start_s3proxy {
function stop_s3proxy {
if [ -n "${S3PROXY_PID}" ]
then
kill $S3PROXY_PID
kill "${S3PROXY_PID}"
fi
if [ -n "${CHAOS_HTTP_PROXY_PID}" ]
then
kill $CHAOS_HTTP_PROXY_PID
kill "${CHAOS_HTTP_PROXY_PID}"
fi
}
@ -182,11 +190,11 @@ function stop_s3proxy {
function start_s3fs {
# Public bucket if PUBLIC is set
if [ -n "${PUBLIC}" ]; then
AUTH_OPT="-o public_bucket=1"
local AUTH_OPT="-o public_bucket=1"
elif [ -n "${S3FS_PROFILE}" ]; then
AUTH_OPT="-o profile=${S3FS_PROFILE}"
local AUTH_OPT="-o profile=${S3FS_PROFILE}"
else
AUTH_OPT="-o passwd_file=${S3FS_CREDENTIALS_FILE}"
local AUTH_OPT="-o passwd_file=${S3FS_CREDENTIALS_FILE}"
fi
# If VALGRIND is set, pass it as options to valgrind.
@ -198,10 +206,10 @@ function start_s3fs {
fi
# On OSX only, we need to specify the direct_io and auto_cache flags.
if [ `uname` = "Darwin" ]; then
DIRECT_IO_OPT="-o direct_io -o auto_cache"
if [ "$(uname)" = "Darwin" ]; then
local DIRECT_IO_OPT="-o direct_io -o auto_cache"
else
DIRECT_IO_OPT=""
local DIRECT_IO_OPT=""
fi
if [ -n "${CHAOS_HTTP_PROXY}" ]; then
@ -213,10 +221,10 @@ function start_s3fs {
# Therefore, on macOS it is not executed via stdbuf.
# This patch may be temporary, but no other method has been found at this time.
#
if [ `uname` = "Darwin" ]; then
VIA_STDBUF_CMDLINE=""
if [ "$(uname)" = "Darwin" ]; then
local VIA_STDBUF_CMDLINE=""
else
VIA_STDBUF_CMDLINE="${STDBUF_BIN} -oL -eL"
local VIA_STDBUF_CMDLINE="${STDBUF_BIN} -oL -eL"
fi
# Common s3fs options:
@ -225,9 +233,6 @@ function start_s3fs {
#
# use_path_request_style
# The test env doesn't have virtual hosts
# createbucket
# S3Proxy always starts with no buckets, this tests the s3fs-fuse
# automatic bucket creation path.
# $AUTH_OPT
# Will be either "-o public_bucket=1"
# or
@ -239,56 +244,57 @@ function start_s3fs {
#
# subshell with set -x to log exact invocation of s3fs-fuse
# shellcheck disable=SC2086
(
set -x
CURL_CA_BUNDLE=/tmp/keystore.pem \
${VIA_STDBUF_CMDLINE} \
${VALGRIND_EXEC} ${S3FS} \
$TEST_BUCKET_1 \
$TEST_BUCKET_MOUNT_POINT_1 \
${VALGRIND_EXEC} \
${S3FS} \
${TEST_BUCKET_1} \
${TEST_BUCKET_MOUNT_POINT_1} \
-o use_path_request_style \
-o url=${S3_URL} \
-o endpoint=${S3_ENDPOINT} \
-o no_check_certificate \
-o ssl_verify_hostname=0 \
-o url="${S3_URL}" \
-o endpoint="${S3_ENDPOINT}" \
-o use_xattr=1 \
-o createbucket \
-o enable_unsigned_payload \
${AUTH_OPT} \
${DIRECT_IO_OPT} \
-o stat_cache_expire=1 \
-o stat_cache_interval_expire=1 \
-o dbglevel=${DBGLEVEL:=info} \
-o dbglevel="${DBGLEVEL:=info}" \
-o no_time_stamp_msg \
-o retries=3 \
-f \
"${@}" &
echo $! >&3
) 3>pid | ${STDBUF_BIN} -oL -eL ${SED_BIN} ${SED_BUFFER_FLAG} "s/^/s3fs: /" &
) 3>pid | "${STDBUF_BIN}" -oL -eL "${SED_BIN}" "${SED_BUFFER_FLAG}" "s/^/s3fs: /" &
sleep 1
export S3FS_PID=$(<pid)
S3FS_PID=$(<pid)
export S3FS_PID
rm -f pid
if [ `uname` = "Darwin" ]; then
set +o errexit
TRYCOUNT=0
while [ $TRYCOUNT -le ${RETRIES:=20} ]; do
df | grep -q $TEST_BUCKET_MOUNT_POINT_1
if [ $? -eq 0 ]; then
if [ "$(uname)" = "Darwin" ]; then
local TRYCOUNT=0
while [ "${TRYCOUNT}" -le "${RETRIES:=20}" ]; do
df | grep -q "${TEST_BUCKET_MOUNT_POINT_1}"
rc=$?
if [ "${rc}" -eq 0 ]; then
break;
fi
sleep 1
TRYCOUNT=`expr ${TRYCOUNT} + 1`
TRYCOUNT=$((TRYCOUNT + 1))
done
if [ $? -ne 0 ]; then
if [ "${rc}" -ne 0 ]; then
exit 1
fi
set -o errexit
else
retry ${RETRIES:=20} grep -q $TEST_BUCKET_MOUNT_POINT_1 /proc/mounts || exit 1
retry "${RETRIES:=20}" grep -q "${TEST_BUCKET_MOUNT_POINT_1}" /proc/mounts || exit 1
fi
# Quick way to start system up for manual testing with options under test
if [[ -n ${INTERACT} ]]; then
echo "Mountpoint $TEST_BUCKET_MOUNT_POINT_1 is ready"
if [[ -n "${INTERACT}" ]]; then
echo "Mountpoint ${TEST_BUCKET_MOUNT_POINT_1} is ready"
echo "control-C to quit"
sleep infinity
exit 0
@ -297,13 +303,13 @@ function start_s3fs {
function stop_s3fs {
# Retry in case file system is in use
if [ `uname` = "Darwin" ]; then
if df | grep -q $TEST_BUCKET_MOUNT_POINT_1; then
retry 10 df "|" grep -q $TEST_BUCKET_MOUNT_POINT_1 "&&" umount $TEST_BUCKET_MOUNT_POINT_1
if [ "$(uname)" = "Darwin" ]; then
if df | grep -q "${TEST_BUCKET_MOUNT_POINT_1}"; then
retry 10 df "|" grep -q "${TEST_BUCKET_MOUNT_POINT_1}" "&&" umount "${TEST_BUCKET_MOUNT_POINT_1}"
fi
else
if grep -q $TEST_BUCKET_MOUNT_POINT_1 /proc/mounts; then
retry 10 grep -q $TEST_BUCKET_MOUNT_POINT_1 /proc/mounts "&&" fusermount -u $TEST_BUCKET_MOUNT_POINT_1
if grep -q "${TEST_BUCKET_MOUNT_POINT_1}" /proc/mounts; then
retry 10 grep -q "${TEST_BUCKET_MOUNT_POINT_1}" /proc/mounts "&&" fusermount -u "${TEST_BUCKET_MOUNT_POINT_1}"
fi
fi
}

File diff suppressed because it is too large.

test/junk_data.c (new file)

@ -0,0 +1,51 @@
/*
* s3fs - FUSE-based file system backed by Amazon S3
*
* Copyright(C) 2021 Andrew Gaul <andrew@gaul.org>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
// Generate junk data at high speed. An alternative to dd if=/dev/urandom.
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
if (argc != 2) {
return 1;
}
long long count = strtoull(argv[1], NULL, 10);
char buf[128 * 1024];
long long i;
for (i = 0; i < count; i += sizeof(buf)) {
long long j;
for (j = 0; j < sizeof(buf) / sizeof(i); ++j) {
*((long long *)buf + j) = i / sizeof(i) + j;
}
fwrite(buf, 1, sizeof(buf) > count - i ? count - i : sizeof(buf), stdout);
}
return 0;
}
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: expandtab sw=4 ts=4 fdm=marker
* vim<600: expandtab sw=4 ts=4
*/
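
Despite the name, the output is deterministic: ignoring any final partial word, the file is the sequence of native-endian 64-bit words 0, 1, 2, .... A hypothetical checker for that pattern (binary stdin on a POSIX system assumed):

#include <cstdio>
#include <cstdint>

int main()
{
    std::uint64_t expected = 0;
    std::uint64_t word = 0;
    while(std::fread(&word, sizeof(word), 1, stdin) == 1){
        if(word != expected){
            std::fprintf(stderr, "mismatch at word %llu\n",
                         static_cast<unsigned long long>(expected));
            return 1;
        }
        ++expected;
    }
    return 0;
}

Usage would be, e.g., ./junk_data 1048576 | ./checker (checker being a hypothetical name).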



@ -40,25 +40,25 @@ UsageFunction()
}
### Check parameters
WHOAMI=`whoami`
OWNNAME=`basename $0`
WHOAMI=$(whoami)
OWNNAME=$(basename "$0")
AUTOYES="no"
ALLYES="no"
DIRPARAM=""
while [ "$1" != "" ]; do
if [ "X$1" = "X-help" -o "X$1" = "X-h" -o "X$1" = "X-H" ]; then
UsageFunction $OWNNAME
if [ "X$1" = "X-help" ] || [ "X$1" = "X-h" ] || [ "X$1" = "X-H" ]; then
UsageFunction "${OWNNAME}"
exit 0
elif [ "X$1" = "X-y" -o "X$1" = "X-Y" ]; then
elif [ "X$1" = "X-y" ] || [ "X$1" = "X-Y" ]; then
AUTOYES="yes"
elif [ "X$1" = "X-all" -o "X$1" = "X-ALL" ]; then
elif [ "X$1" = "X-all" ] || [ "X$1" = "X-ALL" ]; then
ALLYES="yes"
else
if [ "X$DIRPARAM" != "X" ]; then
echo "*** Input error."
echo ""
UsageFunction $OWNNAME
UsageFunction "${OWNNAME}"
exit 1
fi
DIRPARAM=$1
@ -68,7 +68,7 @@ done
if [ "X$DIRPARAM" = "X" ]; then
echo "*** Input error."
echo ""
UsageFunction $OWNNAME
UsageFunction "${OWNNAME}"
exit 1
fi
@ -88,18 +88,17 @@ echo "Please execute this program by responsibility of your own."
echo "#############################################################################"
echo ""
DATE=`date +'%Y%m%d-%H%M%S'`
LOGFILE="$OWNNAME-$DATE.log"
DATE=$(date +'%Y%m%d-%H%M%S')
LOGFILE="${OWNNAME}-${DATE}.log"
echo -n "Start to merge directory object... [$DIRPARAM]"
echo "# Start to merge directory object... [$DIRPARAM]" >> $LOGFILE
echo -n "# DATE : " >> $LOGFILE
echo `date` >> $LOGFILE
echo -n "# BASEDIR : " >> $LOGFILE
echo `pwd` >> $LOGFILE
echo -n "# TARGET PATH : " >> $LOGFILE
echo $DIRPARAM >> $LOGFILE
echo "" >> $LOGFILE
echo "Start to merge directory object... [${DIRPARAM}]"
{
echo "# Start to merge directory object... [${DIRPARAM}]"
echo "# DATE : $(date)"
echo "# BASEDIR : $(pwd)"
echo "# TARGET PATH : ${DIRPARAM}"
echo ""
} > "${LOGFILE}"
if [ "$AUTOYES" = "yes" ]; then
echo "(no confirmation)"
@ -109,80 +108,84 @@ fi
echo ""
### Get Directory list
DIRLIST=`find $DIRPARAM -type d -print | grep -v ^\.$`
DIRLIST=$(find "${DIRPARAM}" -type d -print | grep -v ^\.$)
#
# Main loop
#
for DIR in $DIRLIST; do
### Skip "." and ".." directories
BASENAME=`basename $DIR`
if [ "$BASENAME" = "." -o "$BASENAME" = ".." ]; then
BASENAME=$(basename "${DIR}")
if [ "${BASENAME}" = "." ] || [ "${BASENAME}" = ".." ]; then
continue
fi
if [ "$ALLYES" = "no" ]; then
if [ "${ALLYES}" = "no" ]; then
### Skip "d---------" directories.
### Other clients make directory object "dir/" which don't have
### "x-amz-meta-mode" attribute.
### Then these directories is "d---------", it is target directory.
DIRPERMIT=`ls -ld --time-style=+'%Y%m%d%H%M' $DIR | awk '{print $1}'`
if [ "$DIRPERMIT" != "d---------" ]; then
# shellcheck disable=SC2012
DIRPERMIT=$(ls -ld --time-style=+'%Y%m%d%H%M' "${DIR}" | awk '{print $1}')
if [ "${DIRPERMIT}" != "d---------" ]; then
continue
fi
fi
### Confirm
ANSWER=""
if [ "$AUTOYES" = "yes" ]; then
if [ "${AUTOYES}" = "yes" ]; then
ANSWER="y"
fi
while [ "X$ANSWER" != "XY" -a "X$ANSWER" != "Xy" -a "X$ANSWER" != "XN" -a "X$ANSWER" != "Xn" ]; do
echo -n "Do you merge $DIR? (y/n): "
read ANSWER
while [ "X${ANSWER}" != "XY" ] && [ "X${ANSWER}" != "Xy" ] && [ "X${ANSWER}" != "XN" ] && [ "X${ANSWER}" != "Xn" ]; do
printf "%s" "Do you merge ${DIR} ? (y/n): "
read -r ANSWER
done
if [ "X$ANSWER" != "XY" -a "X$ANSWER" != "Xy" ]; then
if [ "X${ANSWER}" != "XY" ] && [ "X${ANSWER}" != "Xy" ]; then
continue
fi
### Do
CHOWN=`ls -ld --time-style=+'%Y%m%d%H%M' $DIR | awk '{print $3":"$4" "$7}'`
CHMOD=`ls -ld --time-style=+'%Y%m%d%H%M' $DIR | awk '{print $7}'`
TOUCH=`ls -ld --time-style=+'%Y%m%d%H%M' $DIR | awk '{print $6" "$7}'`
# shellcheck disable=SC2012
CHOWN=$(ls -ld --time-style=+'%Y%m%d%H%M' "${DIR}" | awk '{print $3":"$4" "$7}')
# shellcheck disable=SC2012
CHMOD=$(ls -ld --time-style=+'%Y%m%d%H%M' "${DIR}" | awk '{print $7}')
# shellcheck disable=SC2012
TOUCH=$(ls -ld --time-style=+'%Y%m%d%H%M' "${DIR}" | awk '{print $6" "$7}')
echo -n "*** Merge $DIR : "
echo -n " $DIR : " >> $LOGFILE
printf "%s" "*** Merge ${DIR} : "
printf "%s" " ${DIR} : " >> "${LOGFILE}"
chmod 755 $CHMOD > /dev/null 2>&1
chmod 755 "${CHMOD}" > /dev/null 2>&1
RESULT=$?
if [ $RESULT -ne 0 ]; then
if [ "${RESULT}" -ne 0 ]; then
echo "Failed(chmod)"
echo "Failed(chmod)" >> $LOGFILE
echo "Failed(chmod)" >> "${LOGFILE}"
continue
fi
chown $CHOWN > /dev/null 2>&1
chown "${CHOWN}" > /dev/null 2>&1
RESULT=$?
if [ $RESULT -ne 0 ]; then
if [ "${RESULT}" -ne 0 ]; then
echo "Failed(chown)"
echo "Failed(chown)" >> $LOGFILE
echo "Failed(chown)" >> "${LOGFILE}"
continue
fi
touch -t $TOUCH > /dev/null 2>&1
touch -t "${TOUCH}" > /dev/null 2>&1
RESULT=$?
if [ $RESULT -ne 0 ]; then
if [ "${RESULT}" -ne 0 ]; then
echo "Failed(touch)"
echo "Failed(touch)" >> $LOGFILE
echo "Failed(touch)" >> "${LOGFILE}"
continue
fi
echo "Succeed"
echo "Succeed" >> $LOGFILE
echo "Succeed" >> "${LOGFILE}"
done
echo ""
echo "" >> $LOGFILE
echo "" >> "${LOGFILE}"
echo "Finished."
echo -n "# Finished : " >> $LOGFILE
echo `date` >> $LOGFILE
echo "# Finished : $(date)" >> "${LOGFILE}"
#
# Local variables:


@ -1,35 +0,0 @@
#!/bin/bash -e
#
# s3fs - FUSE-based file system backed by Amazon S3
#
# Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
if [[ $EUID -ne 0 ]]
then
echo "This test script must be run as root" 1>&2
exit 1
fi
#
# Local variables:
# tab-width: 4
# c-basic-offset: 4
# End:
# vim600: expandtab sw=4 ts=4 fdm=marker
# vim<600: expandtab sw=4 ts=4
#


@ -2,7 +2,7 @@ s3proxy.secure-endpoint=https://127.0.0.1:8080
s3proxy.authorization=aws-v2-or-v4
s3proxy.identity=local-identity
s3proxy.credential=local-credential
s3proxy.keystore-path=keystore.jks
s3proxy.keystore-path=/tmp/keystore.jks
s3proxy.keystore-password=password
jclouds.provider=transient


@ -46,33 +46,33 @@ func_usage()
echo ""
}
PRGNAME=`basename $0`
PRGNAME=$(basename "$0")
if [ "X$1" = "X-h" -o "X$1" = "X-H" ]; then
func_usage $PRGNAME
if [ "X$1" = "X-h" ] || [ "X$1" = "X-H" ]; then
func_usage "${PRGNAME}"
exit 0
fi
if [ "X$1" = "X" -o "X$2" = "X" -o "X$3" = "X" ]; then
func_usage $PRGNAME
if [ "X$1" = "X" ] || [ "X$2" = "X" ] || [ "X$3" = "X" ]; then
func_usage "${PRGNAME}"
exit 1
fi
BUCKET=$1
BUCKET="$1"
CDIR="$2"
LIMIT=$3
LIMIT="$3"
SILENT=0
if [ "X$4" = "X-silent" ]; then
SILENT=1
fi
FILES_CDIR="${CDIR}/${BUCKET}"
STATS_CDIR="${CDIR}/.${BUCKET}.stat"
CURRENT_CACHE_SIZE=`du -sb "$FILES_CDIR" | awk '{print $1}'`
CURRENT_CACHE_SIZE=$(du -sb "${FILES_CDIR}" | awk '{print $1}')
#
# Check total size
#
if [ $LIMIT -ge $CURRENT_CACHE_SIZE ]; then
if [ "${LIMIT}" -ge "${CURRENT_CACHE_SIZE}" ]; then
if [ $SILENT -ne 1 ]; then
echo "$FILES_CDIR ($CURRENT_CACHE_SIZE) is below allowed $LIMIT"
echo "${FILES_CDIR} (${CURRENT_CACHE_SIZE}) is below allowed ${LIMIT}"
fi
exit 0
fi
@ -86,37 +86,36 @@ TMP_CFILE=""
#
# Make file list by sorted access time
#
find "$STATS_CDIR" -type f -exec stat -c "%X:%n" "{}" \; | sort | while read part
find "${STATS_CDIR}" -type f -exec stat -c "%X:%n" "{}" \; | sort | while read -r part
do
echo Looking at $part
TMP_ATIME=`echo "$part" | cut -d: -f1`
TMP_STATS="`echo "$part" | cut -d: -f2`"
TMP_CFILE=`echo "$TMP_STATS" | sed s/\.$BUCKET\.stat/$BUCKET/`
echo "Looking at ${part}"
TMP_ATIME=$(echo "${part}" | cut -d: -f1)
TMP_STATS=$(echo "${part}" | cut -d: -f2)
TMP_CFILE=$(echo "${TMP_STATS}" | sed -e "s/\\.${BUCKET}\\.stat/${BUCKET}/")
if [ `stat -c %X "$TMP_STATS"` -eq $TMP_ATIME ]; then
rm -f "$TMP_STATS" "$TMP_CFILE" > /dev/null 2>&1
if [ $? -ne 0 ]; then
if [ $SILENT -ne 1 ]; then
echo "ERROR: Could not remove files($TMP_STATS,$TMP_CFILE)"
if [ "$(stat -c %X "${TMP_STATS}")" -eq "${TMP_ATIME}" ]; then
if ! rm "${TMP_STATS}" "${TMP_CFILE}" > /dev/null 2>&1; then
if [ "${SILENT}" -ne 1 ]; then
echo "ERROR: Could not remove files(${TMP_STATS},${TMP_CFILE})"
fi
exit 1
else
if [ $SILENT -ne 1 ]; then
echo "remove file: $TMP_CFILE $TMP_STATS"
if [ "${SILENT}" -ne 1 ]; then
echo "remove file: ${TMP_CFILE} ${TMP_STATS}"
fi
fi
fi
if [ $LIMIT -ge `du -sb "$FILES_CDIR" | awk '{print $1}'` ]; then
if [ $SILENT -ne 1 ]; then
if [ "${LIMIT}" -ge "$(du -sb "${FILES_CDIR}" | awk '{print $1}')" ]; then
if [ "${SILENT}" -ne 1 ]; then
echo "finish removing files"
fi
break
fi
done
if [ $SILENT -ne 1 ]; then
TOTAL_SIZE=`du -sb "$FILES_CDIR" | awk '{print $1}'`
echo "Finish: $FILES_CDIR total size is $TOTAL_SIZE"
if [ "${SILENT}" -ne 1 ]; then
TOTAL_SIZE=$(du -sb "${FILES_CDIR}" | awk '{print $1}')
echo "Finish: ${FILES_CDIR} total size is ${TOTAL_SIZE}"
fi
exit 0


@ -26,18 +26,15 @@
set -o errexit
set -o pipefail
# Require root
REQUIRE_ROOT=require-root.sh
#source $REQUIRE_ROOT
source integration-test-common.sh
CACHE_DIR="/tmp/s3fs-cache"
rm -rf "${CACHE_DIR}"
mkdir "${CACHE_DIR}"
#reserve 200MB for data cache
source test-utils.sh
#reserve 200MB for data cache
FAKE_FREE_DISK_SIZE=200
ENSURE_DISKFREE_SIZE=10
@ -48,13 +45,13 @@ if [ -n "${ALL_TESTS}" ]; then
"use_cache=${CACHE_DIR} -o ensure_diskfree=${ENSURE_DISKFREE_SIZE} -o fake_diskfree=${FAKE_FREE_DISK_SIZE}"
enable_content_md5
enable_noobj_cache
max_stat_cache_size=100
"max_stat_cache_size=100"
nocopyapi
nomultipart
notsup_compat_dir
sigv2
sigv4
singlepart_copy_limit=10 # limit size to exercise multipart code paths
"singlepart_copy_limit=10" # limit size to exercise multipart code paths
#use_sse # TODO: S3Proxy does not support SSE
)
else
@ -65,10 +62,15 @@ fi
start_s3proxy
for flag in "${FLAGS[@]}"; do
echo "testing s3fs flag: $flag"
if ! aws_cli s3api head-bucket --bucket "${TEST_BUCKET_1}" --region "${S3_ENDPOINT}"; then
aws_cli s3 mb "s3://${TEST_BUCKET_1}" --region "${S3_ENDPOINT}"
fi
start_s3fs -o $flag
for flag in "${FLAGS[@]}"; do
echo "testing s3fs flag: ${flag}"
# shellcheck disable=SC2086
start_s3fs -o ${flag}
./integration-test-main.sh


@ -24,19 +24,29 @@
set -o errexit
set -o pipefail
#
# Configuration
#
TEST_TEXT="HELLO WORLD"
TEST_TEXT_FILE=test-s3fs.txt
TEST_DIR=testdir
# shellcheck disable=SC2034
ALT_TEST_TEXT_FILE=test-s3fs-ALT.txt
# shellcheck disable=SC2034
TEST_TEXT_FILE_LENGTH=15
# shellcheck disable=SC2034
BIG_FILE=big-file-s3fs.txt
TEMP_DIR=${TMPDIR:-"/var/tmp"}
# shellcheck disable=SC2034
TEMP_DIR="${TMPDIR:-"/var/tmp"}"
# /dev/urandom can only return 32 MB per block maximum
BIG_FILE_BLOCK_SIZE=$((25 * 1024 * 1024))
BIG_FILE_COUNT=1
# This should be greater than the multipart size
BIG_FILE_LENGTH=$(($BIG_FILE_BLOCK_SIZE * $BIG_FILE_COUNT))
# shellcheck disable=SC2034
BIG_FILE_LENGTH=$((BIG_FILE_BLOCK_SIZE * BIG_FILE_COUNT))
# Set locale because some tests check for English expressions
export LC_ALL=en_US.UTF-8
export RUN_DIR
@ -48,7 +58,7 @@ export RUN_DIR
# and uses gnu commands(gstdbuf, gtruncate, gsed).
# Set your PATH appropriately so that you can find these commands.
#
if [ `uname` = "Darwin" ]; then
if [ "$(uname)" = "Darwin" ]; then
export STDBUF_BIN="gstdbuf"
export TRUNCATE_BIN="gtruncate"
export SED_BIN="gsed"
@ -62,7 +72,7 @@ fi
export SED_BUFFER_FLAG="--unbuffered"
function get_xattr() {
if [ `uname` = "Darwin" ]; then
if [ "$(uname)" = "Darwin" ]; then
xattr -p "$1" "$2"
else
getfattr -n "$1" --only-values "$2"
@ -70,7 +80,7 @@ function get_xattr() {
}
function set_xattr() {
if [ `uname` = "Darwin" ]; then
if [ "$(uname)" = "Darwin" ]; then
xattr -w "$1" "$2" "$3"
else
setfattr -n "$1" -v "$2" "$3"
@ -78,7 +88,7 @@ function set_xattr() {
}
function del_xattr() {
if [ `uname` = "Darwin" ]; then
if [ "$(uname)" = "Darwin" ]; then
xattr -d "$1" "$2"
else
setfattr -x "$1" "$2"
@ -86,7 +96,7 @@ function del_xattr() {
}
function get_size() {
if [ `uname` = "Darwin" ]; then
if [ "$(uname)" = "Darwin" ]; then
stat -f "%z" "$1"
else
stat -c %s "$1"
@ -94,49 +104,51 @@ function get_size() {
}
function check_file_size() {
FILE_NAME="$1"
EXPECTED_SIZE="$2"
local FILE_NAME="$1"
local EXPECTED_SIZE="$2"
# Verify the file size via metadata
size=$(get_size ${FILE_NAME})
if [ $size -ne $EXPECTED_SIZE ]
local size
size=$(get_size "${FILE_NAME}")
if [ "${size}" -ne "${EXPECTED_SIZE}" ]
then
echo "error: expected ${FILE_NAME} to be zero length"
return 1
fi
# Verify the file size via data
size=$(cat ${FILE_NAME} | wc -c)
if [ $size -ne $EXPECTED_SIZE ]
size=$(wc -c < "${FILE_NAME}")
if [ "${size}" -ne "${EXPECTED_SIZE}" ]
then
echo "error: expected ${FILE_NAME} to be $EXPECTED_SIZE length, got $size"
echo "error: expected ${FILE_NAME} to be ${EXPECTED_SIZE} length, got ${size}"
return 1
fi
}
function mk_test_file {
if [ $# == 0 ]; then
TEXT=$TEST_TEXT
if [ $# = 0 ]; then
local TEXT="${TEST_TEXT}"
else
TEXT=$1
local TEXT="$1"
fi
echo $TEXT > $TEST_TEXT_FILE
if [ ! -e $TEST_TEXT_FILE ]
echo "${TEXT}" > "${TEST_TEXT_FILE}"
if [ ! -e "${TEST_TEXT_FILE}" ]
then
echo "Could not create file ${TEST_TEXT_FILE}, it does not exist"
exit 1
fi
# wait & check
BASE_TEXT_LENGTH=`echo $TEXT | wc -c | awk '{print $1}'`
TRY_COUNT=10
local BASE_TEXT_LENGTH; BASE_TEXT_LENGTH=$(echo "${TEXT}" | wc -c | awk '{print $1}')
local TRY_COUNT=10
while true; do
MK_TEXT_LENGTH=`wc -c $TEST_TEXT_FILE | awk '{print $1}'`
if [ $BASE_TEXT_LENGTH -eq $MK_TEXT_LENGTH ]; then
local MK_TEXT_LENGTH
MK_TEXT_LENGTH=$(wc -c "${TEST_TEXT_FILE}" | awk '{print $1}')
if [ "${BASE_TEXT_LENGTH}" -eq "${MK_TEXT_LENGTH}" ]; then
break
fi
TRY_COUNT=`expr $TRY_COUNT - 1`
if [ $TRY_COUNT -le 0 ]; then
local TRY_COUNT=$((TRY_COUNT - 1))
if [ "${TRY_COUNT}" -le 0 ]; then
echo "Could not create file ${TEST_TEXT_FILE}, that file size is something wrong"
fi
sleep 1
@ -144,14 +156,14 @@ function mk_test_file {
}
function rm_test_file {
if [ $# == 0 ]; then
FILE=$TEST_TEXT_FILE
if [ $# = 0 ]; then
local FILE="${TEST_TEXT_FILE}"
else
FILE=$1
local FILE="$1"
fi
rm -f $FILE
rm -f "${FILE}"
if [ -e $FILE ]
if [ -e "${FILE}" ]
then
echo "Could not cleanup file ${TEST_TEXT_FILE}"
exit 1
@ -159,17 +171,17 @@ function rm_test_file {
}
function mk_test_dir {
mkdir ${TEST_DIR}
mkdir "${TEST_DIR}"
if [ ! -d ${TEST_DIR} ]; then
if [ ! -d "${TEST_DIR}" ]; then
echo "Directory ${TEST_DIR} was not created"
exit 1
fi
}
function rm_test_dir {
rmdir ${TEST_DIR}
if [ -e $TEST_DIR ]; then
rmdir "${TEST_DIR}"
if [ -e "${TEST_DIR}" ]; then
echo "Could not remove the test directory, it still exists: ${TEST_DIR}"
exit 1
fi
@ -178,18 +190,18 @@ function rm_test_dir {
# Create and cd to a unique directory for this test run
# Sets RUN_DIR to the name of the created directory
function cd_run_dir {
if [ "$TEST_BUCKET_MOUNT_POINT_1" == "" ]; then
if [ "${TEST_BUCKET_MOUNT_POINT_1}" = "" ]; then
echo "TEST_BUCKET_MOUNT_POINT_1 variable not set"
exit 1
fi
RUN_DIR=${TEST_BUCKET_MOUNT_POINT_1}/${1}
mkdir -p ${RUN_DIR}
cd ${RUN_DIR}
local RUN_DIR="${TEST_BUCKET_MOUNT_POINT_1}/${1}"
mkdir -p "${RUN_DIR}"
cd "${RUN_DIR}"
}
function clean_run_dir {
if [ -d ${RUN_DIR} ]; then
rm -rf ${RUN_DIR} || echo "Error removing ${RUN_DIR}"
if [ -d "${RUN_DIR}" ]; then
rm -rf "${RUN_DIR}" || echo "Error removing ${RUN_DIR}"
fi
}
@ -204,14 +216,14 @@ function init_suite {
# report_pass TEST_NAME
function report_pass {
echo "$1 passed"
TEST_PASSED_LIST+=($1)
TEST_PASSED_LIST+=("$1")
}
# Report a failing test case
# report_fail TEST_NAME
function report_fail {
echo "$1 failed"
TEST_FAILED_LIST+=($1)
TEST_FAILED_LIST+=("$1")
}
# Add tests to the suite
@ -231,43 +243,40 @@ function describe {
# directory in the bucket. An attempt to clean this directory is
# made after the test run.
function run_suite {
orig_dir=$PWD
key_prefix="testrun-$RANDOM"
cd_run_dir $key_prefix
orig_dir="${PWD}"
key_prefix="testrun-${RANDOM}"
cd_run_dir "${key_prefix}"
for t in "${TEST_LIST[@]}"; do
# The following sequence runs tests in a subshell to allow continuation
# on test failure, but still allowing errexit to be in effect during
# the test.
#
# See:
# https://groups.google.com/d/msg/gnu.bash.bug/NCK_0GmIv2M/dkeZ9MFhPOIJ
# Other ways of trying to capture the return value will also disable
# errexit in the function due to bash... compliance with POSIX?
set +o errexit
(set -o errexit; $t $key_prefix)
if [[ $? == 0 ]]; then
report_pass $t
# Ensure test input name differs every iteration
TEST_TEXT_FILE="test-s3fs-${RANDOM}.txt"
TEST_DIR="testdir-${RANDOM}"
# shellcheck disable=SC2034
ALT_TEST_TEXT_FILE="test-s3fs-ALT-${RANDOM}.txt"
# shellcheck disable=SC2034
BIG_FILE="big-file-s3fs-${RANDOM}.txt"
"${t}" "${key_prefix}" && rc=$? || rc=$?
if [ $rc = 0 ]; then
report_pass "${t}"
else
report_fail $t
report_fail "${t}"
fi
set -o errexit
done
cd ${orig_dir}
cd "${orig_dir}"
clean_run_dir
for t in "${TEST_PASSED_LIST[@]}"; do
echo "PASS: $t"
echo "PASS: ${t}"
done
for t in "${TEST_FAILED_LIST[@]}"; do
echo "FAIL: $t"
echo "FAIL: ${t}"
done
passed=${#TEST_PASSED_LIST[@]}
failed=${#TEST_FAILED_LIST[@]}
local passed=${#TEST_PASSED_LIST[@]}
local failed=${#TEST_FAILED_LIST[@]}
echo "SUMMARY for $0: $passed tests passed. $failed tests failed."
echo "SUMMARY for $0: ${passed} tests passed. ${failed} tests failed."
if [[ $failed != 0 ]]; then
if [[ "${failed}" != 0 ]]; then
return 1
else
return 0
@ -275,7 +284,7 @@ function run_suite {
}
function get_ctime() {
if [ `uname` = "Darwin" ]; then
if [ "$(uname)" = "Darwin" ]; then
stat -f "%c" "$1"
else
stat -c "%Z" "$1"
@ -283,7 +292,7 @@ function get_ctime() {
}
function get_mtime() {
if [ `uname` = "Darwin" ]; then
if [ "$(uname)" = "Darwin" ]; then
stat -f "%m" "$1"
else
stat -c "%Y" "$1"
@ -291,7 +300,7 @@ function get_mtime() {
}
function get_atime() {
if [ `uname` = "Darwin" ]; then
if [ "$(uname)" = "Darwin" ]; then
stat -f "%a" "$1"
else
stat -c "%X" "$1"
@ -299,7 +308,7 @@ function get_atime() {
}
function get_permissions() {
if [ `uname` = "Darwin" ]; then
if [ "$(uname)" = "Darwin" ]; then
stat -f "%p" "$1"
else
stat -c "%a" "$1"
@ -307,7 +316,8 @@ function get_permissions() {
}
function check_content_type() {
INFO_STR=`aws_cli s3api head-object --bucket ${TEST_BUCKET_1} --key $1`
local INFO_STR
INFO_STR=$(aws_cli s3api head-object --bucket "${TEST_BUCKET_1}" --key "$1")
if [[ "${INFO_STR}" != *"$2"* ]]
then
echo "moved file content-type is not as expected expected:$2 got:${INFO_STR}"
@ -316,21 +326,23 @@ function check_content_type() {
}
function get_disk_avail_size() {
DISK_AVAIL_SIZE=`BLOCKSIZE=$((1024 * 1024)) df $1 | awk '{print $4}' | tail -n 1`
echo ${DISK_AVAIL_SIZE}
local DISK_AVAIL_SIZE
DISK_AVAIL_SIZE=$(BLOCKSIZE=$((1024 * 1024)) df "$1" | awk '{print $4}' | tail -n 1)
echo "${DISK_AVAIL_SIZE}"
}
function aws_cli() {
FLAGS=""
local FLAGS=""
if [ -n "${S3FS_PROFILE}" ]; then
FLAGS="--profile ${S3FS_PROFILE}"
fi
aws $* --endpoint-url "${S3_URL}" --no-verify-ssl $FLAGS
# shellcheck disable=SC2086,SC2068
aws $@ --endpoint-url "${S3_URL}" --ca-bundle /tmp/keystore.pem ${FLAGS}
}
function wait_for_port() {
PORT=$1
for i in $(seq 30); do
local PORT="$1"
for _ in $(seq 30); do
if exec 3<>"/dev/tcp/127.0.0.1/${PORT}";
then
exec 3<&- # Close for read
@ -343,12 +355,12 @@ function wait_for_port() {
function make_random_string() {
if [ -n "$1" ]; then
END_POS=$1
local END_POS="$1"
else
END_POS=8
local END_POS=8
fi
${BASE64_BIN} --wrap=0 < /dev/urandom | tr -d /+ | head -c ${END_POS}
"${BASE64_BIN}" --wrap=0 < /dev/urandom | tr -d /+ | head -c "${END_POS}"
return 0
}


@ -1,4 +1,4 @@
#!/usr/bin/env python2
#!/usr/bin/env python3
#
# s3fs - FUSE-based file system backed by Amazon S3
#
@ -21,7 +21,6 @@
import os
import unittest
import ConfigParser
import random
import sys
import time
@ -42,7 +41,7 @@ class OssfsUnitTest(unittest.TestCase):
def test_read_file(self):
filename = "%s" % (self.random_string(10))
print filename
print(filename)
f = open(filename, 'w')
data = self.random_string(1000)
@ -61,7 +60,7 @@ class OssfsUnitTest(unittest.TestCase):
def test_rename_file(self):
filename1 = "%s" % (self.random_string(10))
filename2 = "%s" % (self.random_string(10))
print filename1, filename2
print(filename1, filename2)
f = open(filename1, 'w+')
data1 = self.random_string(1000)
@ -81,7 +80,7 @@ class OssfsUnitTest(unittest.TestCase):
def test_rename_file2(self):
filename1 = "%s" % (self.random_string(10))
filename2 = "%s" % (self.random_string(10))
print filename1, filename2
print(filename1, filename2)
f = open(filename1, 'w')
data1 = self.random_string(1000)
@ -104,7 +103,7 @@ class OssfsUnitTest(unittest.TestCase):
filename = "%s" % (self.random_string(10))
fd = os.open(filename, os.O_CREAT|os.O_RDWR)
try:
os.write(fd, 'a' * 42)
os.write(fd, b'a' * 42)
self.assertEqual(os.fstat(fd).st_size, 42)
os.ftruncate(fd, 100)
self.assertEqual(os.fstat(fd).st_size, 100)

test/write_multiblock.cc (new file)

@ -0,0 +1,266 @@
/*
* s3fs - FUSE-based file system backed by Amazon S3
*
* Copyright(C) 2007 Randy Rizun <rrizun@gmail.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <climits>
#include <string>
#include <list>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>
//---------------------------------------------------------
// Structures and Typedefs
//---------------------------------------------------------
struct write_block_part
{
off_t start;
off_t size;
};
typedef std::list<write_block_part> wbpart_list_t;
typedef std::list<std::string> strlist_t;
//---------------------------------------------------------
// Const
//---------------------------------------------------------
const char usage_string[] = "Usage : \"write_multiblock -f <file path> -p <start offset:size>\" (allows -f and -p multiple times.)";
//---------------------------------------------------------
// Utility functions
//---------------------------------------------------------
static unsigned char* create_random_data(off_t size)
{
int fd;
if(-1 == (fd = open("/dev/urandom", O_RDONLY))){
std::cerr << "[ERROR] Could not open /dev/urandom" << std::endl;
return NULL;
}
unsigned char* pbuff;
if(NULL == (pbuff = reinterpret_cast<unsigned char*>(malloc(size)))){
std::cerr << "[ERROR] Could not allocate memory." << std::endl;
close(fd);
return NULL;
}
for(ssize_t readpos = 0, readcnt = 0; readpos < size; readpos += readcnt){
if(-1 == (readcnt = read(fd, &(pbuff[readpos]), static_cast<size_t>(size - readpos)))){
if(EAGAIN != errno && EWOULDBLOCK != errno && EINTR != errno){
std::cerr << "[ERROR] Failed reading from /dev/urandom with errno: " << errno << std::endl;
free(pbuff);
close(fd);
return NULL;
}
readcnt = 0;
}
}
return pbuff;
}
static off_t cvt_string_to_number(const char* pstr)
{
if(!pstr){
return -1;
}
errno = 0;
char* ptemp = NULL;
long long result = strtoll(pstr, &ptemp, 10);
if(!ptemp || ptemp == pstr || *ptemp != '\0'){
return -1;
}
if((result == LLONG_MIN || result == LLONG_MAX) && errno == ERANGE){
return -1;
}
return static_cast<off_t>(result);
}
static bool parse_string(const char* pstr, char delim, strlist_t& strlist)
{
if(!pstr){
return false;
}
std::string strAll(pstr);
while(!strAll.empty()){
size_t pos = strAll.find_first_of(delim);
if(std::string::npos != pos){
strlist.push_back(strAll.substr(0, pos));
strAll = strAll.substr(pos + 1);
}else{
strlist.push_back(strAll);
strAll.erase();
}
}
return true;
}
static bool parse_write_blocks(const char* pstr, wbpart_list_t& wbparts, off_t& max_size)
{
if(!pstr){
return false;
}
strlist_t partlist;
if(!parse_string(pstr, ',', partlist)){
return false;
}
for(strlist_t::const_iterator iter = partlist.begin(); iter != partlist.end(); ++iter){
strlist_t partpair;
if(parse_string(iter->c_str(), ':', partpair) && 2 == partpair.size()){
write_block_part tmp_part;
tmp_part.start = cvt_string_to_number(partpair.front().c_str());
partpair.pop_front();
tmp_part.size = cvt_string_to_number(partpair.front().c_str());
if(tmp_part.start < 0 || tmp_part.size <= 0){
std::cerr << "[ERROR] -p option parameter(" << pstr << ") is something wrong." << std::endl;
return false;
}
if(max_size < tmp_part.size){
max_size = tmp_part.size;
}
wbparts.push_back(tmp_part);
}else{
std::cerr << "[ERROR] -p option parameter(" << pstr << ") is something wrong." << std::endl;
return false;
}
}
return true;
}
static bool parse_arguments(int argc, char** argv, strlist_t& files, wbpart_list_t& wbparts, off_t& max_size)
{
if(argc < 2 || !argv){
std::cerr << "[ERROR] The -f option and -p option are required as arguments." << std::endl;
std::cerr << usage_string << std::endl;
return false;
}
files.clear();
wbparts.clear();
max_size = 0;
int opt;
while(-1 != (opt = getopt(argc, argv, "f:p:"))){
switch(opt){
case 'f':
files.push_back(std::string(optarg));
break;
case 'p':
if(!parse_write_blocks(optarg, wbparts, max_size)){
return false;
}
break;
default:
std::cerr << usage_string << std::endl;
return false;
}
}
if(files.empty() || wbparts.empty()){
std::cerr << "[ERROR] The -f option and -p option are required as arguments." << std::endl;
std::cerr << usage_string << std::endl;
return false;
}
return true;
}
//---------------------------------------------------------
// Main
//---------------------------------------------------------
int main(int argc, char** argv)
{
// parse arguments
strlist_t files;
wbpart_list_t wbparts;
off_t max_size = 0;
if(!parse_arguments(argc, argv, files, wbparts, max_size)){
exit(EXIT_FAILURE);
}
// make data and buffer
unsigned char* pData;
if(NULL == (pData = create_random_data(max_size))){
exit(EXIT_FAILURE);
}
for(strlist_t::const_iterator fiter = files.begin(); fiter != files.end(); ++fiter){
// open/create file
int fd;
struct stat st;
if(0 == stat(fiter->c_str(), &st)){
if(!S_ISREG(st.st_mode)){
std::cerr << "[ERROR] File " << fiter->c_str() << " is existed, but it is not regular file." << std::endl;
free(pData);
exit(EXIT_FAILURE);
}
if(-1 == (fd = open(fiter->c_str(), O_WRONLY))){
std::cerr << "[ERROR] Could not open " << fiter->c_str() << std::endl;
free(pData);
exit(EXIT_FAILURE);
}
}else{
if(-1 == (fd = open(fiter->c_str(), O_WRONLY | O_CREAT | O_TRUNC, 0644))){
std::cerr << "[ERROR] Could not create " << fiter->c_str() << std::endl;
free(pData);
exit(EXIT_FAILURE);
}
}
// write blocks
for(wbpart_list_t::const_iterator piter = wbparts.begin(); piter != wbparts.end(); ++piter){
// write one block
for(ssize_t writepos = 0, writecnt = 0; writepos < piter->size; writepos += writecnt){
if(-1 == (writecnt = pwrite(fd, &(pData[writepos]), static_cast<size_t>(piter->size - writepos), (piter->start + writepos)))){
if(EAGAIN != errno && EWOULDBLOCK != errno && EINTR != errno){
std::cerr << "[ERROR] Failed writing to " << fiter->c_str() << " by errno : " << errno << std::endl;
close(fd);
free(pData);
exit(EXIT_FAILURE);
}
writecnt = 0;
}
}
}
// close file
close(fd);
}
free(pData);
exit(EXIT_SUCCESS);
}
/*
* Local variables:
* tab-width: 4
* c-basic-offset: 4
* End:
* vim600: expandtab sw=4 ts=4 fdm=marker
* vim<600: expandtab sw=4 ts=4
*/
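
To make the -p grammar concrete: blocks are comma-separated <start offset>:<size> pairs, and every block is written to every file given with -f. A hedged driver sketch for parse_write_blocks with illustrative values (the function is file-static, so this would have to live in the same translation unit):

// Hypothetical check of the parser above.
static bool parse_example()
{
    wbpart_list_t wbparts;
    off_t max_size = 0;
    // "-p 0:1024,4096:512" -> {start=0, size=1024} and {start=4096, size=512}
    if(!parse_write_blocks("0:1024,4096:512", wbparts, max_size)){
        return false;
    }
    return 2 == wbparts.size() && 1024 == max_size;   // max_size sizes the scratch buffer
}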


@ -1,46 +0,0 @@
#!/usr/bin/env python2
#
# s3fs - FUSE-based file system backed by Amazon S3
#
# Copyright 2007-2008 Randy Rizun <rrizun@gmail.com>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
import os
import sys
if len(sys.argv) < 4 or len(sys.argv) % 2 != 0:
sys.exit("Usage: %s OUTFILE OFFSET_1 SIZE_1 [OFFSET_N SIZE_N]...")
filename = sys.argv[1]
fd = os.open(filename, os.O_CREAT | os.O_TRUNC | os.O_WRONLY)
try:
for i in range(2, len(sys.argv), 2):
data = "a" * int(sys.argv[i+1])
os.lseek(fd, int(sys.argv[i]), os.SEEK_SET)
os.write(fd, data)
finally:
os.close(fd)
#
# Local variables:
# tab-width: 4
# c-basic-offset: 4
# End:
# vim600: expandtab sw=4 ts=4 fdm=marker
# vim<600: expandtab sw=4 ts=4
#