
Proxy node configuration

Cristi Pufu edited this page Nov 17, 2015 · 3 revisions
  • Install prerequisites:
apt-get -y install swift-proxy memcached

  • Configure memcache to listen on default interface and restart the service:
export ADDRESS=`ifconfig eth0 | grep inet\ addr | cut -d":" -f2 | cut -d" " -f1`
perl -pi -e "s/-l 127.0.0.1/-l $ADDRESS/" /etc/memcached.conf
service memcached restart
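The extraction above assumes the older ifconfig output style (`inet addr:10.0.0.5`); newer net-tools releases print `inet 10.0.0.5` instead. A small sketch that tolerates both formats, shown here against a hardcoded sample line rather than live `ifconfig eth0` output:

```shell
# Sketch: address extraction tolerant of both ifconfig output styles.
# The sample line is hardcoded for illustration; in practice pipe
# `ifconfig eth0` into the awk instead of the echo.
sample="        inet addr:10.0.0.5  Bcast:10.0.0.255  Mask:255.255.255.0"
echo "$sample" | awk '/inet /{ sub(/^addr:/, "", $2); print $2; exit }'
```

The `sub()` strips the `addr:` prefix when present and is a no-op on the newer format, so the same one-liner works either way.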

  • Create the swift-proxy-server service configuration file /etc/swift/proxy-server.conf:
cat >/etc/swift/proxy-server.conf <<EOF
[DEFAULT]
cert_file = /etc/swift/cert.crt
key_file = /etc/swift/cert.key
bind_port = 8080
workers = 8
user = swift

[pipeline:main]
pipeline = healthcheck proxy-logging cache bulk slo dlo tempauth proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:proxy-logging]
use = egg:swift#proxy_logging

[filter:tempauth]
use = egg:swift#tempauth
user_system_root = testpass .admin 

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:bulk]
use = egg:swift#bulk

[filter:slo]
use = egg:swift#slo

[filter:dlo]
use = egg:swift#dlo

[filter:cache]
use = egg:swift#memcache
memcache_servers = $ADDRESS:11211
EOF

The [filter:tempauth] section defines the accounts, users, and password credentials for the tempauth identity service. You can specify multiple accounts and, for each account, multiple users with different roles and proxy URLs, in the following format:

user_account_username = password .role proxy_url/loadbalancer_url

For example:

[filter:tempauth]
use = egg:swift#tempauth
user_project1_user1 = password1 .reseller_admin
user_project1_user2 = password2 .admin
user_project2_user1 = password3 http://192.168.85.129:8080/v1/AUTH_project2

Read more about tempauth, and about how to configure Swift to use Keystone, in the Swift documentation.

The [filter:cache] section describes the memcache servers, so if you have multiple proxy servers, you can specify them in the following format:

[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.85.128:11211,192.168.85.129:11211

  • Create the ring files using the swift-ring-builder utility (run this step on the first proxy node only; the resulting files must be identical on all nodes, so they are copied to the others later):
swift-ring-builder /etc/swift/account.builder create $PARTPOWER $REPLICAS $MINHOURS
swift-ring-builder /etc/swift/container.builder create $PARTPOWER $REPLICAS $MINHOURS
swift-ring-builder /etc/swift/object.builder create $PARTPOWER $REPLICAS $MINHOURS

The $PARTPOWER parameter sets the number of partitions in the ring to 2^$PARTPOWER. Choose this value based on the total amount of storage you expect your cluster to eventually use; the swift partition power calculator can help.
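As a sketch of the sizing math (the ~100-partitions-per-disk rule of thumb and the disk count below are assumptions, not values from this guide): take the ceiling of log2 of the largest partition count you expect to need.

```shell
# Sketch: estimate $PARTPOWER for a cluster expected to grow to MAX_DISKS
# disks, aiming for roughly 100 partitions per disk (a common rule of thumb).
MAX_DISKS=50   # hypothetical maximum disk count over the cluster's lifetime
PARTPOWER=$(awk -v d="$MAX_DISKS" 'BEGIN {
    p = log(d * 100) / log(2)       # log2 of the target partition count
    c = int(p); if (c < p) c++      # round up to the next whole power
    print c
}')
echo "$PARTPOWER"   # 13 here: 2^13 = 8192 partitions for a target of ~5000
```

Partition power cannot be changed after the ring is built, which is why it is sized for the cluster's eventual capacity rather than its initial one.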

The $REPLICAS parameter represents the number of replicas you expect each object to have (how many times a file is replicated inside the cluster). The recommended value is 3.

The $MINHOURS parameter sets the minimum number of hours that must pass before a given partition can be moved again.

See the Swift documentation for more information about building the rings.

  • Now, for every storage device on each storage node you have to add an entry to the ring file like this:
export ZONE=                    # set the zone number for that storage device
export STORAGE_LOCAL_NET_IP=    # and the IP address
export WEIGHT=100               # relative weight (higher for bigger/faster disks)
export DEVICE=sdb               # device name, mounted under /srv/node/ on the storage node
swift-ring-builder /etc/swift/account.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6002/$DEVICE $WEIGHT
swift-ring-builder /etc/swift/container.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6001/$DEVICE $WEIGHT
swift-ring-builder /etc/swift/object.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6000/$DEVICE $WEIGHT

Choose the zones wisely, based on physical location, network separation, or any other attribute that reduces the chance of multiple replicas becoming unavailable at the same time, as data is never replicated to another disk or server in the same zone.

Also, choose the device $WEIGHT based on disk size and speed, as this parameter tells Swift how to balance your data across all devices. For example, when a drive fails, you set the device weight to 0 before replacing the disk with a new one. See the Swift documentation for more on managing the rings and handling drive failures.
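Done one device at a time, the add step above gets tedious. A sketch that generates all three add commands per device from a small inventory (the zone/IP/device values below are hypothetical) and only prints them, so the output can be reviewed before being piped to sh on the first proxy node:

```shell
#!/bin/sh
# Sketch: print the ring "add" commands for a list of devices.
# Each inventory line is: zone ip device weight (hypothetical values).
while read ZONE IP DEVICE WEIGHT; do
    for svc in account:6002 container:6001 object:6000; do
        name=${svc%%:*}   # ring name (account/container/object)
        port=${svc##*:}   # matching storage service port
        echo "swift-ring-builder /etc/swift/$name.builder add z$ZONE-$IP:$port/$DEVICE $WEIGHT"
    done
done <<'EOF'
1 192.168.85.130 sdb 100
2 192.168.85.131 sdb 100
EOF
```

Keeping the inventory in one place also makes it easier to double-check that zones and weights are consistent before the ring is rebalanced.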

  • Check the ring contents:
swift-ring-builder /etc/swift/account.builder
swift-ring-builder /etc/swift/container.builder
swift-ring-builder /etc/swift/object.builder
  • Rebalance the rings:
swift-ring-builder /etc/swift/account.builder rebalance
swift-ring-builder /etc/swift/container.builder rebalance
swift-ring-builder /etc/swift/object.builder rebalance
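The same command repeated for all three builder files can be collapsed into a loop; in this sketch an echo is left in front so it only prints what it would run, and is dropped once the output looks right:

```shell
# Sketch: run one subcommand against all three builder files in a loop.
# Remove the leading "echo" to actually execute the commands.
for ring in account container object; do
    echo swift-ring-builder /etc/swift/$ring.builder rebalance
done
```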

  • Copy the resulting ring files (*.ring.gz) to all other nodes (proxy and storage) and make sure the swift user owns them:
scp <ip_address_proxy_node_where_ring_resides>:/etc/swift/*.ring.gz /etc/swift/
chown -R swift:swift /etc/swift

  • Start the proxy service:
swift-init proxy start