Installing Redis (Standalone / Cluster) with Docker -- Linux

Speed up Docker image downloads

  • Edit the /etc/docker/daemon.json file
vi /etc/docker/daemon.json
  • Configure registry mirrors
{
  "registry-mirrors": ["https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com" ,
    "https://kfwkfulq.mirror.aliyuncs.com"
  ]
} 
  • Restart Docker
service docker restart
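
After the restart you can optionally confirm that the mirrors took effect (assuming a Docker version whose docker info output lists them):

docker info | grep -A 3 "Registry Mirrors"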

1. Pull the latest Redis image

docker pull redis:latest

2. List local images

  • Use the following command to check whether the redis image is present:
docker images

3. Mount the configuration files into Docker

Next, mount the redis configuration file so the redis container starts from it. (Mounting binds a host file or directory to a path inside the container; if the file is modified on the host, the change is reflected inside the container as well.)


  1. Mount the redis configuration file
  • Location of redis.conf on Linux: /home/redis/myredis/myredis.conf
  2. Mount the redis data directory (for data persistence).
  • Location of the redis data directory on Linux: /home/redis/myredis/data

The configuration file can be customized as you like.

  • Create the directory
# create the /home/redis/myredis directory if it does not already exist
mkdir -p /home/redis/myredis 
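
The data directory and the (initially empty) configuration file mounted later in this article can be prepared the same way; the paths below match the ones used in section 5:

# create the persistence directory and an empty config file to be filled in section 4
mkdir -p /home/redis/myredis/data
touch /home/redis/myredis/myredis.conf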

4. Configuration file

# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#bind 127.0.0.1

protected-mode no

port 6379

tcp-backlog 511

requirepass 000415

timeout 0

tcp-keepalive 300

daemonize no

supervised no

pidfile /var/run/redis_6379.pid

loglevel notice

logfile ""

databases 1

always-show-logo yes

save 900 1
save 300 10
save 60 10000

stop-writes-on-bgsave-error yes

rdbcompression yes

rdbchecksum yes

dbfilename dump.rdb

dir ./

replica-serve-stale-data yes

replica-read-only yes

repl-diskless-sync no

repl-disable-tcp-nodelay no

replica-priority 100

lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no

appendonly yes

appendfilename "appendonly.aof"

# always: fsync to the AOF file immediately after every write command
# appendfsync always
# everysec: put writes into the AOF buffer and flush the buffer to the AOF file once per second (the default policy)
appendfsync everysec
# no: put writes into the AOF buffer and let the operating system decide when to flush it to disk
# appendfsync no

no-appendfsync-on-rewrite no

# trigger an AOF rewrite when the file has grown by this percentage since the last rewrite
auto-aof-rewrite-percentage 100
# minimum AOF file size before a rewrite is triggered
auto-aof-rewrite-min-size 64mb

aof-load-truncated yes

aof-use-rdb-preamble yes

lua-time-limit 5000

slowlog-max-len 128

notify-keyspace-events ""

hash-max-ziplist-entries 512
hash-max-ziplist-value 64

list-max-ziplist-size -2

list-compress-depth 0

set-max-intset-entries 512

zset-max-ziplist-entries 128
zset-max-ziplist-value 64

hll-sparse-max-bytes 3000

stream-node-max-bytes 4096
stream-node-max-entries 100

activerehashing yes

hz 10

dynamic-hz yes

aof-rewrite-incremental-fsync yes

rdb-save-incremental-fsync yes

5. Start the redis container

docker run --restart=always --log-opt max-size=100m --log-opt max-file=2 \
  -p 6379:6379 --name myredis \
  -v /home/redis/myredis/myredis.conf:/etc/redis/redis.conf \
  -v /home/redis/myredis/data:/data \
  -d redis redis-server /etc/redis/redis.conf \
  --appendonly yes --requirepass qifeng
--restart=always: always restart the container when the Docker daemon starts
--log-opt ...: log options, here capping logs at two files of 100 MB each
-p 6379:6379: publish container port 6379 on the host
--name: give the container a name
-v: volume mounts
/home/redis/myredis/myredis.conf:/etc/redis/redis.conf binds myredis.conf on the Linux host to redis.conf inside the container.
/home/redis/myredis/data:/data does the same for the data directory
-d redis: run the redis image in the background
redis-server /etc/redis/redis.conf: start redis from the config file inside the container; /etc/redis/redis.conf is the mounted path, i.e. /home/redis/myredis/myredis.conf on the Linux host
--appendonly yes: enable AOF persistence
--requirepass qifeng: set the access password

6. Test

6.1. Check the startup status with docker ps

docker ps -a | grep myredis # check whether the container started successfully

6.2. View the container logs

# Command: docker logs --since 30m <container name>
# --since 30m shows this container's logs from the last 30 minutes.
docker logs --since 30m myredis

6.3. Connect inside the container for testing

# Command: docker exec -it <container name> /bin/bash
# Appending redis-cli runs the client directly instead of opening a shell.
docker exec -it myredis redis-cli
  • Reading data without authenticating returns an error
  • (error) NOAUTH Authentication required.

  • Authenticate with the password
auth <password>

  • Check whether a password is configured
config get requirepass
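
A quick sanity check inside the same redis-cli session (hedged: the effective password here is qifeng, since the options appended to the docker run command override the requirepass value in the mounted file; the key hello is just an example):

auth qifeng
set hello world
get hello

Once authentication succeeds, get hello should return "world".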

7. Remove the Redis container

7.1. List all containers

docker ps -a
# stop the running Redis container
docker stop myredis

7.2. Remove the redis container

docker rm myredis

7.3. Remove the Redis image

7.3.1. List all images

docker images

7.3.2. Remove the image

docker rmi 739b59b96069

8. Sharded cluster: Docker installation

8.1. Configuration

  • Create a Docker network
docker network create redis-net

  • Create the template file redis-cluster.tmpl
cd /home/redis/myredis/
vi redis-cluster.tmpl
  • Template contents
# port (the port number)
port ${PORT}
# masterauth (password used between cluster nodes; must match requirepass below)
masterauth qifeng
# requirepass (password for client access to redis)
requirepass qifeng
# cluster-enabled yes (enable cluster mode)
cluster-enabled yes
# cluster-config-file nodes.conf (file where cluster node state is kept)
cluster-config-file nodes.conf
# cluster-node-timeout 5000 (time before a node failure is detected)
cluster-node-timeout 5000
# cluster-announce-ip (IP announced to the cluster, to work around NAT; pre-filled with the host IP, edit manually later if it changes)
cluster-announce-ip 172.16.156.139
# cluster-announce-port (port announced to the cluster, to work around NAT)
cluster-announce-port ${PORT}
# cluster-announce-bus-port (cluster bus port announced to the cluster, to work around NAT)
cluster-announce-bus-port 1${PORT}
# appendonly yes (enable AOF)
appendonly yes

appendfilename "appendonly.aof"
# always: fsync to the AOF file immediately after every write command
# appendfsync always
# everysec: put writes into the AOF buffer and flush the buffer to the AOF file once per second (the default policy)
appendfsync everysec
# no: put writes into the AOF buffer and let the operating system decide when to flush it to disk
# appendfsync no

no-appendfsync-on-rewrite no

# trigger an AOF rewrite when the file has grown by this percentage since the last rewrite
auto-aof-rewrite-percentage 100
# minimum AOF file size before a rewrite is triggered
auto-aof-rewrite-min-size 64mb

# protected mode
protected-mode no
# number of databases
databases 1
# log file
logfile /home/redis/myredis/${PORT}/run.log
  • Script that generates the config files
vi mkdirConfig.sh
# contents:
for port in `seq 6379 6384`; do \
  mkdir -p ${port}/conf \
  && PORT=${port} envsubst < redis-cluster.tmpl > ${port}/conf/redis.conf \
  && mkdir -p ${port}/data;\
done
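
After the script runs, you can spot-check one of the generated files to confirm that envsubst substituted ${PORT} correctly (a small verification step, not part of the original write-up):

grep -E "^(port|cluster-announce-port|logfile)" 6379/conf/redis.conf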

8.2. Start

  • Startup script
vi redisStart.sh
for port in $(seq 6379 6384); do \
  docker run -di -p ${port}:${port} -p 1${port}:1${port} \
  --restart always --log-opt max-size=100m --log-opt max-file=2 \
	--name redis-${port} --net redis-net \
  -v /home/redis/myredis/${port}/conf/redis.conf:/usr/local/etc/redis/redis.conf \
  -v /home/redis/myredis/${port}/data:/data \
  redis redis-server /usr/local/etc/redis/redis.conf; \
done
  • Run the scripts
chmod +x *.sh
./mkdirConfig.sh
./redisStart.sh
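
A quick way to confirm all six containers are up (the output format below is just a convenient choice):

docker ps --filter "name=redis-" --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"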

8.3. Create the cluster

  • Enter one of the containers
docker exec -it redis-6384 /bin/bash
  • Run the cluster creation command
redis-cli --cluster create 172.16.156.139:6379 172.16.156.139:6380 172.16.156.139:6381 172.16.156.139:6382 172.16.156.139:6383 172.16.156.139:6384 --cluster-replicas 1 -a qifeng

8.4. Check the cluster state

# enter the container
docker exec -it redis-6384 /bin/bash
# check by IP
redis-cli -a qifeng --cluster check 172.16.156.139:6382
# check by container name
redis-cli -a qifeng --cluster check redis-6382:6382

8.5. View cluster and node information

# enter the container
docker exec -it redis-6384 /bin/bash
# connect to one of the cluster nodes
redis-cli -c -a qifeng -h redis-6383 -p 6383
# cluster information
cluster info
# cluster node information
cluster nodes
# replication status
info replication



8.6. Add a new node, 7004

  • Add a new master node 7004 to the cluster and store num = 10 in it

8.6.1. Add the configuration

cd /home/redis/myredis/
mkdir -p 7004/conf
mkdir -p 7004/data
cd 7004/conf
vi redis.conf
# contents:
# port (the port number)
port 7004
# masterauth (password used between cluster nodes; must match requirepass below)
masterauth qifeng
# requirepass (password for client access to redis)
requirepass qifeng
# cluster-enabled yes (enable cluster mode)
cluster-enabled yes
# cluster-config-file nodes.conf (file where cluster node state is kept)
cluster-config-file nodes.conf
# cluster-node-timeout 5000 (time before a node failure is detected)
cluster-node-timeout 5000
# cluster-announce-ip (IP announced to the cluster, to work around NAT; pre-filled with the host IP, edit manually later if it changes)
cluster-announce-ip 172.16.156.139
# cluster-announce-port (port announced to the cluster, to work around NAT)
cluster-announce-port 7004
# cluster-announce-bus-port (cluster bus port announced to the cluster, to work around NAT)
cluster-announce-bus-port 17004
# appendonly yes (enable AOF)
appendonly yes

appendfilename "appendonly.aof"
# always: fsync to the AOF file immediately after every write command
# appendfsync always
# everysec: put writes into the AOF buffer and flush the buffer to the AOF file once per second (the default policy)
appendfsync everysec
# no: put writes into the AOF buffer and let the operating system decide when to flush it to disk
# appendfsync no

no-appendfsync-on-rewrite no

# trigger an AOF rewrite when the file has grown by this percentage since the last rewrite
auto-aof-rewrite-percentage 100
# minimum AOF file size before a rewrite is triggered
auto-aof-rewrite-min-size 64mb

# protected mode
protected-mode no
# number of databases
databases 1
# log file
logfile /home/redis/myredis/7004/run.log

8.6.2. Start node 7004

docker run -di -p 7004:7004 -p 17004:17004 \
  --restart always --log-opt max-size=100m --log-opt max-file=2 \
  --name redis-7004 --net redis-net \
  -v /home/redis/myredis/7004/conf/redis.conf:/usr/local/etc/redis/redis.conf \
  -v /home/redis/myredis/7004/data:/data \
  redis redis-server /usr/local/etc/redis/redis.conf

8.6.3. Add 7004 to the cluster

docker exec -it redis-6384 /bin/bash
redis-cli --cluster add-node 172.16.156.139:7004 172.16.156.139:6381 -a qifeng

8.6.4. Inspect the cluster

docker exec -it redis-6384 /bin/bash
redis-cli -c -a qifeng -h redis-6383 -p 6383
cluster nodes

  • You can see that 7004 has not been assigned any hash slots yet

8.6.5. Move hash slots

  • Check which slot the key num is assigned to
docker exec -it redis-6384 /bin/bash
redis-cli -c -a qifeng -h redis-6383 -p 6383
set num 10

  • The key num hashes to slot 2765, which currently lives on 6379
  • Start moving slots: give 7004 a batch of 300 slots taken from 6379, a range that includes slot 2765 (a scripted variant is sketched after this list)
  • Reshard 6379's slots

redis-cli --cluster reshard 172.16.156.139:6379 -a qifeng

  • Confirm the slot migration when prompted
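
The reshard command above walks through interactive prompts. As a hedged, scripted alternative, the same 300-slot move can be expressed non-interactively; <src-id> and <dst-id> are placeholders for the node IDs of 6379 and 7004 as printed by cluster nodes:

redis-cli --cluster reshard 172.16.156.139:6379 -a qifeng \
  --cluster-from <src-id> --cluster-to <dst-id> \
  --cluster-slots 300 --cluster-yes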

8.6.6. Verify

redis-cli -c -a qifeng -h redis-6383 -p 6383
get num

8.6.7. Remove node 7004

  • Move the slots back to 6379, i.e. reshard 7004's slots away

  • Remove the 7004 node

redis-cli --cluster del-node 172.16.156.139:7004 48763ae603dc6844dcae9381b68171182184502f -a qifeng

9. Sentinel cluster: installing from source on Linux

9.1. Standalone installation

1. Install dependencies

yum install -y gcc tcl

2. Download and compile the source

  • Download page: http://redis.io/download; grab the latest stable release.
cd /home/redis
wget https://github.com/redis/redis/archive/7.0.5.tar.gz
tar xzf 7.0.5.tar.gz
cd redis-7.0.5/
make && make install

  • After make completes, the src directory under redis-7.0.5 contains the compiled server binary redis-server and the client binary redis-cli used for testing
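
You can confirm the build and installation by printing the versions; they should report 7.0.5 if the steps above succeeded:

redis-server --version
redis-cli --version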

3. Adjust a few settings in redis.conf

# Bind address. The default 127.0.0.1 only allows local access; 0.0.0.0 allows connections from any IP.
bind 0.0.0.0
# Disable protected mode
protected-mode no
# Number of databases, set to 1
databases 1

4. Start the redis service

cd src
./redis-server ../redis.conf
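
From another terminal, a minimal connectivity check (assuming the default port 6379 from the config); it should answer PONG:

redis-cli -p 6379 ping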

9.2. Build a master-replica cluster

1. Create the directories

cd /home/redis
mkdir 7001 7002 7003

2. Restore the original persistence configuration

# enable RDB
# save ""
save 3600 1
save 300 100
save 60 10000

# disable AOF
appendonly no

3. Copy the config file into each instance directory

  • Then copy the redis-7.0.5/redis.conf file into the three directories
cd /home/redis
# Option 1: copy one by one
cp redis-7.0.5/redis.conf 7001
cp redis-7.0.5/redis.conf 7002
cp redis-7.0.5/redis.conf 7003

# Option 2: pipe into xargs to copy in one go
echo 7001 7002 7003 | xargs -t -n 1 cp redis-7.0.5/redis.conf

4. Change the port and working directory of each instance

sed -i -e 's/6379/7001/g' -e 's/dir .\//dir \/home\/redis\/7001\//g' 7001/redis.conf
sed -i -e 's/6379/7002/g' -e 's/dir .\//dir \/home\/redis\/7002\//g' 7002/redis.conf
sed -i -e 's/6379/7003/g' -e 's/dir .\//dir \/home\/redis\/7003\//g' 7003/redis.conf
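
To verify the substitutions, you can print the rewritten port and dir lines of each instance (a small check, assuming the directories created above):

for d in 7001 7002 7003; do grep -E "^(port|dir) " $d/redis.conf; done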

5. Set the announce IP of each instance

  • To avoid confusion later, declare each instance's announce IP in its redis.conf
# announced IP of this redis instance
replica-announce-ip 172.16.156.139
  • Every directory needs the change; it can also be done in one go
cd /home/redis
# one at a time
sed -i '1a replica-announce-ip 172.16.156.139' 7001/redis.conf
sed -i '1a replica-announce-ip 172.16.156.139' 7002/redis.conf
sed -i '1a replica-announce-ip 172.16.156.139' 7003/redis.conf

# or all at once
printf '%s\n' 7001 7002 7003 | xargs -I{} -t sed -i '1a replica-announce-ip 172.16.156.139' {}/redis.conf

6. Start

cd /home/redis/redis-7.0.5/src
# instance 1
./redis-server ../../7001/redis.conf
# instance 2
./redis-server ../../7002/redis.conf
# instance 3
./redis-server ../../7003/redis.conf

7. Establish the master-replica relationship

The three instances do not yet know about each other. The master-replica relationship is configured with the replicaof command, or slaveof before Redis 5.0.
There are two modes, permanent and temporary:

  • Edit the configuration file (permanent)
    • Add one line to redis.conf: slaveof <masterip> <masterport> (a sketch follows after this list)
  • Connect to the redis service with the redis-cli client and run the slaveof command (lost after a restart)
slaveof <masterip> <masterport>

Note: Redis 5.0 added the replicaof command, which behaves exactly like slaveof.
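
If you prefer the permanent approach (the first option above), the directive can simply be appended to the replica configs; a minimal sketch using the IP and master port from this setup:

echo "replicaof 172.16.156.139 7001" >> /home/redis/7002/redis.conf
echo "replicaof 172.16.156.139 7001" >> /home/redis/7003/redis.conf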

  • Here the second approach is used
cd /home/redis/redis-7.0.5/src
# connect to 7002
redis-cli -p 7002
# run slaveof
slaveof 172.16.156.139 7001

# connect to 7003
redis-cli -p 7003
# run slaveof
slaveof 172.16.156.139 7001
  • Then connect to node 7001 and check the replication status:
# connect to 7001
redis-cli -p 7001
# check the status
info replication

9.3. Build the sentinel cluster

1. Prepare instances and configuration

# go to the directory
cd /home/redis
# create the directories
mkdir s1 s2 s3
  • Create a sentinel.conf file in the s1 directory with the following content:
port 27001
sentinel announce-ip 172.16.156.139
sentinel monitor mymaster 172.16.156.139 7001 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
dir "/home/redis/s1"

Explanation:

  • port 27001: port of this sentinel instance
  • sentinel monitor mymaster 172.16.156.139 7001 2: identifies the master
    • mymaster: an arbitrary, user-chosen name for the master
    • 172.16.156.139 7001: the master's IP and port
    • 2: the quorum used when deciding the master is down and electing a new one
  • Copy the s1/sentinel.conf file into the s2 and s3 directories (run the following commands in /home/redis)
cd /home/redis
# Option 1: copy one by one
cp s1/sentinel.conf s2
cp s1/sentinel.conf s3
# Option 2: pipe into xargs to copy in one go
echo s2 s3 | xargs -t -n 1 cp s1/sentinel.conf
  • Edit the config files in the s2 and s3 folders, changing the ports to 27002 and 27003 respectively:
sed -i -e 's/27001/27002/g' -e 's/s1/s2/g' s2/sentinel.conf
sed -i -e 's/27001/27003/g' -e 's/s1/s3/g' s3/sentinel.conf

2. Start

cd /home/redis
# sentinel 1
redis-sentinel s1/sentinel.conf
# sentinel 2
redis-sentinel s2/sentinel.conf
# sentinel 3
redis-sentinel s3/sentinel.conf
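
Each redis-sentinel command above occupies its terminal. As a convenience not taken from the original steps, the three sentinels can also be started in the background with their output redirected to log files:

nohup redis-sentinel s1/sentinel.conf > s1/sentinel.log 2>&1 &
nohup redis-sentinel s2/sentinel.conf > s2/sentinel.log 2>&1 &
nohup redis-sentinel s3/sentinel.conf > s3/sentinel.log 2>&1 &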

Redis default configuration file

View it on GitHub

# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
#
# ./redis-server /path/to/redis.conf

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.

################################## INCLUDES ###################################

# Include one or more other config files here.  This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings.  Include files can include
# other files, so use this wisely.
#
# Note that option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# Included paths may contain wildcards. All files matching the wildcards will
# be included in alphabetical order.
# Note that if an include path contains a wildcards but no files match it when
# the server is started, the include statement will be ignored and no error will
# be emitted.  It is safe, therefore, to include wildcard files from empty
# directories.
#
# include /path/to/local.conf
# include /path/to/other.conf
# include /path/to/fragments/*.conf
#

################################## MODULES #####################################

# Load modules at startup. If the server is not able to load modules
# it will abort. It is possible to use multiple loadmodule directives.
#
# loadmodule /path/to/my_module.so
# loadmodule /path/to/other_module.so

################################## NETWORK #####################################

# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all available network interfaces on the host machine.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
# Each address can be prefixed by "-", which means that redis will not fail to
# start if the address is not available. Being not available only refers to
# addresses that does not correspond to any network interface. Addresses that
# are already in use will always fail, and unsupported protocols will always BE
# silently skipped.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1     # listens on two specific IPv4 addresses
# bind 127.0.0.1 ::1              # listens on loopback IPv4 and IPv6
# bind * -::*                     # like the default, all available interfaces
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only on the
# IPv4 and IPv6 (if available) loopback interface addresses (this means Redis
# will only be able to accept client connections from the same host that it is
# running on).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# COMMENT OUT THE FOLLOWING LINE.
#
# You will also need to set a password unless you explicitly disable protected
# mode.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 127.0.0.1 -::1

# By default, outgoing connections (from replica to master, from Sentinel to
# instances, cluster bus, etc.) are not bound to a specific local address. In
# most cases, this means the operating system will handle that based on routing
# and the interface through which the connection goes out.
#
# Using bind-source-addr it is possible to configure a specific address to bind
# to, which may also affect how the connection gets routed.
#
# Example:
#
# bind-source-addr 10.0.0.1

# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
#
# When protected mode is on and the default user has no password, the server
# only accepts local connections from the IPv4 address (127.0.0.1), IPv6 address
# (::1) or Unix domain sockets.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured.
protected-mode yes

# Redis uses default hardened security configuration directives to reduce the
# attack surface on innocent users. Therefore, several sensitive configuration
# directives are immutable, and some potentially-dangerous commands are blocked.
#
# Configuration directives that control files that Redis writes to (e.g., 'dir'
# and 'dbfilename') and that aren't usually modified during runtime
# are protected by making them immutable.
#
# Commands that can increase the attack surface of Redis and that aren't usually
# called by users are blocked by default.
#
# These can be exposed to either all connections or just local ones by setting
# each of the configs listed below to either of these values:
#
# no    - Block for any connection (remain immutable)
# yes   - Allow for any connection (no protection)
# local - Allow only for local connections. Ones originating from the
#         IPv4 address (127.0.0.1), IPv6 address (::1) or Unix domain sockets.
#
# enable-protected-configs no
# enable-debug-command no
# enable-module-command no

# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379

# TCP listen() backlog.
#
# In high requests-per-second environments you need a high backlog in order
# to avoid slow clients connection issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511

# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /run/redis.sock
# unixsocketperm 700

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0

# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Force network equipment in the middle to consider the connection to be
#    alive.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
tcp-keepalive 300

# Apply OS-specific mechanism to mark the listening socket with the specified
# ID, to support advanced routing and filtering capabilities.
#
# On Linux, the ID represents a connection mark.
# On FreeBSD, the ID represents a socket cookie ID.
# On OpenBSD, the ID represents a route table ID.
#
# The default value is 0, which implies no marking is required.
# socket-mark-id 0

################################# TLS/SSL #####################################

# By default, TLS/SSL is disabled. To enable it, the "tls-port" configuration
# directive can be used to define TLS-listening ports. To enable TLS on the
# default port, use:
#
# port 0
# tls-port 6379

# Configure a X.509 certificate and private key to use for authenticating the
# server to connected clients, masters or cluster peers.  These files should be
# PEM formatted.
#
# tls-cert-file redis.crt
# tls-key-file redis.key
#
# If the key file is encrypted using a passphrase, it can be included here
# as well.
#
# tls-key-file-pass secret

# Normally Redis uses the same certificate for both server functions (accepting
# connections) and client functions (replicating from a master, establishing
# cluster bus connections, etc.).
#
# Sometimes certificates are issued with attributes that designate them as
# client-only or server-only certificates. In that case it may be desired to use
# different certificates for incoming (server) and outgoing (client)
# connections. To do that, use the following directives:
#
# tls-client-cert-file client.crt
# tls-client-key-file client.key
#
# If the key file is encrypted using a passphrase, it can be included here
# as well.
#
# tls-client-key-file-pass secret

# Configure a DH parameters file to enable Diffie-Hellman (DH) key exchange,
# required by older versions of OpenSSL (<3.0). Newer versions do not require
# this configuration and recommend against it.
#
# tls-dh-params-file redis.dh

# Configure a CA certificate(s) bundle or directory to authenticate TLS/SSL
# clients and peers.  Redis requires an explicit configuration of at least one
# of these, and will not implicitly use the system wide configuration.
#
# tls-ca-cert-file ca.crt
# tls-ca-cert-dir /etc/ssl/certs

# By default, clients (including replica servers) on a TLS port are required
# to authenticate using valid client side certificates.
#
# If "no" is specified, client certificates are not required and not accepted.
# If "optional" is specified, client certificates are accepted and must be
# valid if provided, but are not required.
#
# tls-auth-clients no
# tls-auth-clients optional

# By default, a Redis replica does not attempt to establish a TLS connection
# with its master.
#
# Use the following directive to enable TLS on replication links.
#
# tls-replication yes

# By default, the Redis Cluster bus uses a plain TCP connection. To enable
# TLS for the bus protocol, use the following directive:
#
# tls-cluster yes

# By default, only TLSv1.2 and TLSv1.3 are enabled and it is highly recommended
# that older formally deprecated versions are kept disabled to reduce the attack surface.
# You can explicitly specify TLS versions to support.
# Allowed values are case insensitive and include "TLSv1", "TLSv1.1", "TLSv1.2",
# "TLSv1.3" (OpenSSL >= 1.1.1) or any combination.
# To enable only TLSv1.2 and TLSv1.3, use:
#
# tls-protocols "TLSv1.2 TLSv1.3"

# Configure allowed ciphers.  See the ciphers(1ssl) manpage for more information
# about the syntax of this string.
#
# Note: this configuration applies only to <= TLSv1.2.
#
# tls-ciphers DEFAULT:!MEDIUM

# Configure allowed TLSv1.3 ciphersuites.  See the ciphers(1ssl) manpage for more
# information about the syntax of this string, and specifically for TLSv1.3
# ciphersuites.
#
# tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256

# When choosing a cipher, use the server's preference instead of the client
# preference. By default, the server follows the client's preference.
#
# tls-prefer-server-ciphers yes

# By default, TLS session caching is enabled to allow faster and less expensive
# reconnections by clients that support it. Use the following directive to disable
# caching.
#
# tls-session-caching no

# Change the default number of TLS sessions cached. A zero value sets the cache
# to unlimited size. The default size is 20480.
#
# tls-session-cache-size 5000

# Change the default timeout of cached TLS sessions. The default timeout is 300
# seconds.
#
# tls-session-cache-timeout 60

################################# GENERAL #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
# When Redis is supervised by upstart or systemd, this parameter has no impact.
daemonize no

# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
#   supervised no      - no supervision interaction
#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
#                        requires "expect stop" in your upstart job config
#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
#                        on startup, and updating Redis status on a regular
#                        basis.
#   supervised auto    - detect upstart or systemd method based on
#                        UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
#       They do not enable continuous pings back to your supervisor.
#
# The default is "no". To run under upstart/systemd, you can simply uncomment
# the line below:
#
# supervised auto

# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
#
# Note that on modern Linux systems "/run/redis.pid" is more conforming
# and should be used instead.
pidfile /var/run/redis_6379.pid

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice

# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""

# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no

# Specify the syslog identity.
# syslog-ident redis

# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0

# To disable the built in crash log, which will possibly produce cleaner core
# dumps when they are needed, uncomment the following:
#
# crash-log-enabled no

# To disable the fast memory check that's run as part of the crash log, which
# will possibly let redis terminate sooner, uncomment the following:
#
# crash-memcheck-enabled no

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16

# By default Redis shows an ASCII art logo only when started to log to the
# standard output and if the standard output is a TTY and syslog logging is
# disabled. Basically this means that normally a logo is displayed only in
# interactive sessions.
#
# However it is possible to force the pre-4.0 behavior and always show a
# ASCII art logo in startup logs by setting the following option to yes.
always-show-logo no

# By default, Redis modifies the process title (as seen in 'top' and 'ps') to
# provide some runtime information. It is possible to disable this and leave
# the process name as executed by setting the following to no.
set-proc-title yes

# When changing the process title, Redis uses the following template to construct
# the modified title.
#
# Template variables are specified in curly brackets. The following variables are
# supported:
#
# {title}           Name of process as executed if parent, or type of child process.
# {listen-addr}     Bind address or '*' followed by TCP or TLS port listening on, or
#                   Unix socket if only that's available.
# {server-mode}     Special mode, i.e. "[sentinel]" or "[cluster]".
# {port}            TCP port listening on, or 0.
# {tls-port}        TLS port listening on, or 0.
# {unixsocket}      Unix domain socket listening on, or "".
# {config-file}     Name of configuration file used.
#
proc-title-template "{title} {listen-addr} {server-mode}"

# Set the local environment which is used for string comparison operations, and 
# also affect the performance of Lua scripts. Empty String indicates the locale 
# is derived from the environment variables.
locale-collate ""

################################ SNAPSHOTTING  ################################

# Save the DB to disk.
#
# save <seconds> <changes> [<seconds> <changes> ...]
#
# Redis will save the DB if the given number of seconds elapsed and it
# surpassed the given number of write operations against the DB.
#
# Snapshotting can be completely disabled with a single empty string argument
# as in following example:
#
# save ""
#
# Unless specified otherwise, by default Redis will save the DB:
#   * After 3600 seconds (an hour) if at least 1 change was performed
#   * After 300 seconds (5 minutes) if at least 100 changes were performed
#   * After 60 seconds if at least 10000 changes were performed
#
# You can set these explicitly by uncommenting the following line.
#
# save 3600 1 300 100 60 10000

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# By default compression is enabled as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes

# Enables or disables full sanitization checks for ziplist and listpack etc when
# loading an RDB or RESTORE payload. This reduces the chances of a assertion or
# crash later on while processing commands.
# Options:
#   no         - Never perform full sanitization
#   yes        - Always perform full sanitization
#   clients    - Perform full sanitization only for user connections.
#                Excludes: RDB files, RESTORE commands received from the master
#                connection, and client connections which have the
#                skip-sanitize-payload ACL flag.
# The default should be 'clients' but since it currently affects cluster
# resharding via MIGRATE, it is temporarily set to 'no' by default.
#
# sanitize-dump-payload no

# The filename where to dump the DB
dbfilename dump.rdb

# Remove RDB files used by replication in instances without persistence
# enabled. By default this option is disabled, however there are environments
# where for regulations or other security concerns, RDB files persisted on
# disk by masters in order to feed replicas, or stored on disk by replicas
# in order to load them for the initial synchronization, should be deleted
# ASAP. Note that this option ONLY WORKS in instances that have both AOF
# and RDB persistence disabled, otherwise is completely ignored.
#
# An alternative (and sometimes better) way to obtain the same effect is
# to use diskless replication on both master and replicas instances. However
# in the case of replicas, diskless is not always an option.
rdb-del-sync-files no

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./

################################# REPLICATION #################################

# Master-Replica replication. Use replicaof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
#   +------------------+      +---------------+
#   |      Master      | ---> |    Replica    |
#   | (receive writes) |      |  (exact copy) |
#   +------------------+      +---------------+
#
# 1) Redis replication is asynchronous, but you can configure a master to
#    stop accepting writes if it appears to be not connected with at least
#    a given number of replicas.
# 2) Redis replicas are able to perform a partial resynchronization with the
#    master if the replication link is lost for a relatively small amount of
#    time. You may want to configure the replication backlog size (see the next
#    sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
#    network partition replicas automatically try to reconnect to masters
#    and resynchronize with them.
#
# replicaof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the replica request.
#
# masterauth <master-password>
#
# However this is not enough if you are using Redis ACLs (for Redis version
# 6 or greater), and the default user is not capable of running the PSYNC
# command and/or other commands needed for replication. In this case it's
# better to configure a special user to use with replication, and specify the
# masteruser configuration as such:
#
# masteruser <username>
#
# When masteruser is specified, the replica will authenticate against its
# master using the new AUTH form: AUTH <username> <password>.

# When a replica loses its connection with the master, or when the replication
# is still in progress, the replica can act in two different ways:
#
# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
#    still reply to client requests, possibly with out of date data, or the
#    data set may just be empty if this is the first synchronization.
#
# 2) If replica-serve-stale-data is set to 'no' the replica will reply with error
#    "MASTERDOWN Link with MASTER is down and replica-serve-stale-data is set to 'no'"
#    to all data access commands, excluding commands such as:
#    INFO, REPLICAOF, AUTH, SHUTDOWN, REPLCONF, ROLE, CONFIG, SUBSCRIBE,
#    UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB, COMMAND, POST,
#    HOST and LATENCY.
#
replica-serve-stale-data yes

# You can configure a replica instance to accept writes or not. Writing against
# a replica instance may be useful to store some ephemeral data (because data
# written on a replica will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default replicas are read-only.
#
# Note: read only replicas are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only replica exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only replicas using 'rename-command' to shadow all the
# administrative / dangerous commands.
replica-read-only yes

# Replication SYNC strategy: disk or socket.
#
# New replicas and reconnecting replicas that are not able to continue the
# replication process just receiving differences, need to do what is called a
# "full synchronization". An RDB file is transmitted from the master to the
# replicas.
#
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
#                 file on disk. Later the file is transferred by the parent
#                 process to the replicas incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
#              RDB file to replica sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more replicas
# can be queued and served with the RDB file as soon as the current child
# producing the RDB file finishes its work. With diskless replication instead
# once the transfer starts, new replicas arriving will be queued and a new
# transfer will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple
# replicas will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync yes

# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the replicas.
#
# This is important since once the transfer starts, it is not possible to serve
# new replicas arriving, that will be queued for the next RDB transfer, so the
# server waits a delay in order to let more replicas arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5

# When diskless replication is enabled with a delay, it is possible to let
# the replication start before the maximum delay is reached if the maximum
# number of replicas expected have connected. Default of 0 means that the
# maximum is not defined and Redis will wait the full delay.
repl-diskless-sync-max-replicas 0

# -----------------------------------------------------------------------------
# WARNING: Since in this setup the replica does not immediately store an RDB on
# disk, it may cause data loss during failovers. RDB diskless load + Redis
# modules not handling I/O reads may cause Redis to abort in case of I/O errors
# during the initial synchronization stage with the master.
# -----------------------------------------------------------------------------
#
# Replica can load the RDB it reads from the replication link directly from the
# socket, or store the RDB to a file and read that file after it was completely
# received from the master.
#
# In many cases the disk is slower than the network, and storing and loading
# the RDB file may increase replication time (and even increase the master's
# Copy on Write memory and replica buffers).
# However, when parsing the RDB file directly from the socket, in order to avoid
# data loss it's only safe to flush the current dataset when the new dataset is
# fully loaded in memory, resulting in higher memory usage.
# For this reason we have the following options:
#
# "disabled"    - Don't use diskless load (store the rdb file to the disk first)
# "swapdb"      - Keep current db contents in RAM while parsing the data directly
#                 from the socket. Replicas in this mode can keep serving current
#                 dataset while replication is in progress, except for cases where
#                 they can't recognize master as having a data set from same
#                 replication history.
#                 Note that this requires sufficient memory, if you don't have it,
#                 you risk an OOM kill.
# "on-empty-db" - Use diskless load only when current dataset is empty. This is 
#                 safer and avoid having old and new dataset loaded side by side
#                 during replication.
repl-diskless-load disabled

# Master send PINGs to its replicas in a predefined interval. It's possible to
# change this interval with the repl_ping_replica_period option. The default
# value is 10 seconds.
#
# repl-ping-replica-period 10

# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of replica.
# 2) Master timeout from the point of view of replicas (data, pings).
# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-replica-period otherwise a timeout will be detected
# every time there is low traffic between the master and the replica. The default
# value is 60 seconds.
#
# repl-timeout 60

# Disable TCP_NODELAY on the replica socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to replicas. But this can add a delay for
# the data to appear on the replica side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the replica side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and replicas are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# replica data when replicas are disconnected for some time, so that when a
# replica wants to reconnect again, often a full resync is not needed, but a
# partial resync is enough, just passing the portion of data the replica
# missed while disconnected.
#
# The bigger the replication backlog, the longer the replica can endure the
# disconnect and later be able to perform a partial resynchronization.
#
# The backlog is only allocated if there is at least one replica connected.
#
# repl-backlog-size 1mb

# After a master has no connected replicas for some time, the backlog will be
# freed. The following option configures the amount of seconds that need to
# elapse, starting from the time the last replica disconnected, for the backlog
# buffer to be freed.
#
# Note that replicas never free the backlog for timeout, since they may be
# promoted to masters later, and should be able to correctly "partially
# resynchronize" with other replicas: hence they should always accumulate backlog.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600

# The replica priority is an integer number published by Redis in the INFO
# output. It is used by Redis Sentinel in order to select a replica to promote
# into a master if the master is no longer working correctly.
#
# A replica with a low priority number is considered better for promotion, so
# for instance if there are three replicas with priority 10, 100, 25 Sentinel
# will pick the one with priority 10, that is the lowest.