ceph/examples/rgw-cache/nginx.conf


rgw: Adding data cache and CDN capabilities

This feature adds a data cache to RGW, using Nginx as the cache server. It introduces two new APIs: an Auth API and a Cache API.

Performance tests using hsbench:

16K objects:
- RGW direct access: Mode: GET, Ops: 3001, MB/s: 46.89, Lat(ms): [min: 30.4, avg: 33.2, 99%: 34.7, max: 35.2]
- Nginx access (objects not yet cached): Mode: GET, Ops: 1363, MB/s: 21.30, Lat(ms): [min: 63.8, avg: 73.8, 99%: 78.1, max: 86.6]
- Nginx access (objects cached): Mode: GET, Ops: 2446, MB/s: 38.22, Lat(ms): [min: 36.9, avg: 41.0, 99%: 43.9, max: 45.9]

512K objects:
- RGW direct access: Mode: GET, Ops: 1492, MB/s: 746.00, Lat(ms): [min: 60.4, avg: 66.7, 99%: 73.5, max: 75.9]
- Nginx access (objects not yet cached): Mode: GET, Ops: 1382, MB/s: 691.00, Lat(ms): [min: 64.5, avg: 72.1, 99%: 77.9, max: 82.8]
- Nginx access (objects cached): Mode: GET, Ops: 2947, MB/s: 1473.50, Lat(ms): [min: 3.3, avg: 32.7, 99%: 62.2, max: 72.1]

2M objects:
- RGW direct access: Mode: GET, Ops: 613, MB/s: 1226.00, Lat(ms): [min: 143.6, avg: 162.0, 99%: 180.2, max: 190.1]
- Nginx access (objects not yet cached): Mode: GET, Ops: 462, MB/s: 924.00, Lat(ms): [min: 180.2, avg: 215.0, 99%: 243.2, max: 248.3]
- Nginx access (objects cached): Mode: GET, Ops: 1392, MB/s: 2784.00, Lat(ms): [min: 3.0, avg: 5.3, 99%: 18.8, max: 30.2]

10M objects:
- RGW direct access: Mode: GET, Ops: 135, MB/s: 1350.00, Lat(ms): [min: 191.1, avg: 265.8, 99%: 373.1, max: 382.8]
- Nginx access (objects not yet cached): Mode: GET, Ops: 120, MB/s: 1200.00, Lat(ms): [min: 302.1, avg: 428.8, 99%: 561.2, max: 583.7]
- Nginx access (objects cached): Mode: GET, Ops: 281, MB/s: 2810.00, Lat(ms): [min: 3.2, avg: 8.3, 99%: 16.9, max: 25.6]

gdal_translate, 4 GiB image:
gdal_translate -co NUM_THREADS=ALL_CPUS /vsis3/hello/sat.tif
- Nginx (not yet cached): real 0m24.714s, user 0m8.692s, sys 0m10.360s
- Nginx (cached): real 0m21.070s, user 0m9.140s, sys 0m10.316s
- RGW direct access: real 0m21.859s, user 0m8.850s, sys 0m10.386s

The results show that for objects larger than 512K the cache at least doubles throughput. For small objects, the overhead of sending the auth request makes the cache less efficient. The cached-object result in the 10M test is explained by the 25 Gb/s network limit (it could otherwise reach more). With GDAL (an image encoder/decoder that reads over S3 using range requests) the results differ little, because GDAL encodes/decodes on a single CPU. GDAL was chosen because it exercises Nginx's smart byte-range caching: https://www.nginx.com/blog/smart-efficient-byte-range-caching-nginx/

Signed-off-by: Or Friedmann <ofriedma@redhat.com>
2020-02-03 10:36:10 +00:00
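The smart byte-range caching referenced above relies on Nginx's `slice` module, which splits large range requests into fixed-size sub-requests that are cached independently. A minimal illustrative fragment in the style of the linked Nginx article (the cache path, zone name, and upstream name here are assumptions, not part of this example):

```nginx
proxy_cache_path /data/cache levels=2:2 keys_zone=rgwcache:100m;

server {
    location / {
        # Split each range request into 1 MiB slices cached separately,
        # so a partial read of a large object warms reusable cache entries.
        slice              1m;
        proxy_cache        rgwcache;
        proxy_cache_key    $uri$is_args$args$slice_range;
        proxy_set_header   Range $slice_range;
        proxy_http_version 1.1;
        proxy_cache_valid  200 206 1h;
        proxy_pass         http://rgw;
    }
}
```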
user nginx;
# Process per core
worker_processes auto;
pid /var/run/nginx.pid;
events {
    # Number of connections per worker
    worker_connections 1024;
}
http {
    types_hash_max_size 4096;
    lua_package_path '/usr/local/openresty/lualib/?.lua;;';
    aws_auth $aws_token {
        # Access key and secret key of the cache.
        # Please substitute the access key and secret key of the amz-cache cap user.
        access_key cache;
        secret_key cache;
        service s3;
        region us-east-1;
    }
    # This map chain falls back to the client's original Authorization header
    # whenever the aws_auth module refuses to create one.
    map $aws_token $awsauth {
        default $http_authorization;
        ~.      $aws_token; # regular expression matching any non-empty value
    }
    map $request_uri $awsauthtwo {
        "/"     $http_authorization;
        "~\?"   $http_authorization;
        default $awsauth;
    }
    map $request_method $awsauththree {
        default  $awsauthtwo;
        "PUT"    $http_authorization;
        "HEAD"   $http_authorization;
        "POST"   $http_authorization;
        "DELETE" $http_authorization;
        "COPY"   $http_authorization;
    }
    map $http_if_match $awsauthfour {
        ~.      $http_authorization; # regular expression matching any non-empty value
        default $awsauththree;
    }
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    error_log  /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nodelay on;
    keepalive_timeout 65;
    include /etc/nginx/conf.d/*.conf;
}
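Note that this file only computes the `$awsauthfour` variable; the `server` blocks that consume it come from the `conf.d/*.conf` files pulled in by the final `include`. A minimal sketch of what such an included file could look like (the listen port, upstream name, and location layout here are assumptions, not part of this example):

```nginx
server {
    listen 80;
    location / {
        # Forward the Authorization header selected by the map chain:
        # the cache user's signed token for cacheable GETs, or the
        # client's original header for mutating/conditional requests.
        proxy_set_header Authorization $awsauthfour;
        proxy_pass       http://rgw;
    }
}
```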