Migration to Huawei Cloud
Module 9: Using and Migrating Caches and Queues
Objectives
⚫ Upon completion of this course, you will:
Understand Huawei Cloud caching and queuing services.
Understand how to migrate caches and queues to the corresponding services.
Be able to perform hands-on exercises.
3
Contents
1. Service Overview
2. Migrating Self-Built Redis Services
3. Migrating Self-Built Kafka Services
4
Distributed Cache Service
Distributed Cache Service (DCS) for Redis:
• A Huawei Cloud implementation of open-source services
• Supports data persistence by default
• Supports high-performance clusters and cross-AZ HA
• Essentially a key-value database
5
GaussDB(for Redis)
GaussDB(for Redis):
• A Huawei-developed database engine
• Compatible with Redis and supports high QPS
• Supports high-performance clusters and cross-AZ HA
• Essentially a key-value database
6
Typical Usage: Redis as a Cache
The application accesses the Redis cache first; if the data is found, it is returned directly. In case of a cache miss, the application reads the data from the database (for example, RDS) and then writes it to the cache. Data is written directly to the database.
Advantages
• There is no requirement on cache reliability.
• Only required data is cached.
Disadvantages
• A cache miss causes a performance penalty.
• Dirty data may exist.
7
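A minimal sketch of this cache-aside pattern using the redis-py client. The instance address, the key scheme, the 1-hour TTL, and the stand-in database query are illustrative assumptions, not part of the course material:

import redis

r = redis.Redis(host="redis.example.com", port=6379)  # assumed DCS Redis address

def get_from_rds(user_id):
    # Stand-in for the real database query (assumption for the example).
    return f"row-for-{user_id}".encode()

def read_user(user_id):
    key = f"user:{user_id}"
    cached = r.get(key)            # access the cache first
    if cached is not None:
        return cached              # cache hit: data is found and returned directly
    value = get_from_rds(user_id)  # cache miss: read from the database
    r.set(key, value, ex=3600)     # then write it to the cache (1-hour TTL assumed)
    return value

# Writes go directly to the database and do not update the cache here,
# which is why dirty data may exist until the cached entry expires.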
Typical Usage: Redis as a Key-Value Database
Advan Advantages
tages
• A high-performance database
Write data based • Flexible data structures
on keys
• Almost no limit on the data size
Redis cache
Application Read data based (possibly a
on keys Disadvantages
Disadvantages
cluster)
• No transaction capability
• Complex queries not supported
8
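A minimal sketch of key-based reads and writes with redis-py, to contrast with the cache usage above. The address, key names, and fields are assumptions for illustration:

import redis

r = redis.Redis(host="redis.example.com", port=6379, decode_responses=True)  # assumed address

# Redis as the database itself: store a record as a hash keyed by its ID.
r.hset("order:1001", mapping={"user": "alice", "amount": "99.5", "status": "paid"})
print(r.hgetall("order:1001"))                     # read the record back by key

# Flexible structures: keep an index in a sorted set for range lookups by amount.
r.zadd("orders:by_amount", {"order:1001": 99.5})
print(r.zrangebyscore("orders:by_amount", 50, 100))

# Complex queries (joins, ad-hoc filters) are not supported and must be modeled by hand.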
Distributed Message Service
Distributed Message Service (DMS):
• A Huawei Cloud implementation of open-source Kafka, RabbitMQ, and RocketMQ
• Handles hundreds of millions of messages
• Connects message producers and consumers in a loosely coupled architecture
• Cross-AZ HA
9
Producer/Consumer Model Brings Loose Coupling
Message producers only publish messages to the Kafka instance, and message consumers poll the Kafka instance for messages.
10
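A minimal sketch of this loosely coupled producer/consumer model using the kafka-python client. The broker address, topic name, and consumer group are assumptions for illustration; with DMS you would substitute the instance's connection address:

from kafka import KafkaProducer, KafkaConsumer

BROKERS = "dms-kafka.example.com:9092"   # assumed Kafka/DMS connection address

# Producer side: only publishes messages; it knows nothing about who consumes them.
producer = KafkaProducer(bootstrap_servers=BROKERS)
producer.send("orders", b"order-1001 created")    # assumed topic name
producer.flush()

# Consumer side: polls the Kafka instance for messages at its own pace.
consumer = KafkaConsumer("orders",
                         bootstrap_servers=BROKERS,
                         group_id="billing",          # assumed consumer group
                         auto_offset_reset="earliest")
for record in consumer:
    print(record.offset, record.value)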
Contents
1. Service Overview
2. Migrating Self-Built Redis Services
3. Migrating Self-Built Kafka Services
11
Redis Migration Principles
Offline migration principles:
• Append Only File (AOF)
• Redis Database (RDB)
Online migration principles:
• PSYNC
• SCAN
12
Redis Migration Principles - AOF
⚫ Redis provides RDB and AOF persistence mechanisms:
RDB saves database snapshots to disks in binary mode.
AOF persistence logs all commands (including parameters) written to the database in the protocol format to record your dataset.
Client --(command request)--> Server --(command content in the network protocol format)--> AOF file
For example, run the following commands:
redis> RPUSH list 1 2 3 4
(integer) 4
redis> LRANGE list 0 -1
1) "1"
2) "2"
3) "3"
4) "4"
redis> RPOP list
"4"
redis> LPOP list
"1"
redis> LPUSH list 1
(integer) 3
redis> KEYS *
1) "list"
Only the write commands are persisted (read commands such as LRANGE and KEYS are not). The four write commands are saved in the AOF file as follows. The SELECT command is added by Redis; the other commands are executed on the client. Each field is terminated by \r\n in the actual file; the fields are grouped per command here for layout reasons:
*2 $6 SELECT $1 0
*6 $5 RPUSH $4 list $1 1 $1 2 $1 3 $1 4
*2 $4 RPOP $4 list
*2 $4 LPOP $4 list
*3 $5 LPUSH $4 list $1 1
13
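The protocol format shown above is mechanical: *<number of fields>, then each field as $<byte length> followed by the field itself, every element terminated by \r\n. A small illustrative Python function (not part of any Redis client API) reproduces the entries above:

def encode_aof_entry(*args):
    # Encode one command the way it is appended to the AOF file (RESP format).
    parts = [f"*{len(args)}\r\n"]
    for arg in args:
        data = str(arg)
        parts.append(f"${len(data.encode())}\r\n{data}\r\n")
    return "".join(parts)

print(repr(encode_aof_entry("SELECT", 0)))                  # '*2\r\n$6\r\nSELECT\r\n$1\r\n0\r\n'
print(repr(encode_aof_entry("RPUSH", "list", 1, 2, 3, 4)))  # matches the *6 ... entry above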
Redis Migration Principles - AOF
⚫ Command persistence with AOF can be divided into three phases:
Command sending: Redis sends information such as the executed commands, command parameters, and number of command parameters to the AOF
program.
Cache appending: The AOF program converts the received commands into the network communication protocol format, and then appends the
protocol content to the AOF buffer of the server.
File writing and saving: The content in the AOF buffer is appended to the end of the AOF file. If the specified AOF saving conditions are
met, the fsync or fdatasync function is invoked to save the written content to the disk.
Currently, Redis supports three AOF persistence frequencies:
• AOF_FSYNC_NO: Do not persist data.
• AOF_FSYNC_EVERYSEC: Persist data once every second.
• AOF_FSYNC_ALWAYS: Persist data once after every write.
By default, Huawei Cloud DCS for Redis saves data once every
second. You can modify this configuration on the console.
14
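The three frequencies correspond to the appendfsync setting (no, everysec, always). On a self-built Redis it can be inspected or changed at runtime as sketched below with redis-py; the address is an assumption, and on managed DCS instances the equivalent parameter is changed on the console as noted above:

import redis

r = redis.Redis(host="redis.example.com", port=6379, decode_responses=True)  # assumed self-built Redis

print(r.config_get("appendonly"))        # whether AOF persistence is enabled
print(r.config_get("appendfsync"))       # no / everysec / always
r.config_set("appendfsync", "everysec")  # AOF_FSYNC_EVERYSEC: fsync roughly once per second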
Redis Migration Principles - RDB
⚫ RDB persistence, or snapshot persistence, is to generate a point-in-time snapshot of your dataset in the current
process and save the snapshot as an .rdb file to the disk. When restarting, Redis can restore data from the
snapshot file.
An RDB file (RDB files are binary, so records are not separated by line breaks):
52 45 44 49 53       # The file starts with the string "REDIS".
30 30 30 33          # RDB version number in big endian. Here, the version is 0003.
FE 00                # "FE" indicates the database (DB) number. Redis supports multiple DBs, which are numbered. Here, 00 indicates DB0. Key-value pairs start.
FD $length-encoding  # "FD" indicates the expiration time in seconds, encoded using length encoding.
$value-type          # One byte is used to indicate the value type, such as set, hash, list, and zset.
$string-encoded-key  # A key, which is encoded using string encoding.
$encoded-value       # A value. Encoding varies according to the value type.
FC $length-encoding  # "FC" indicates the expiration time in milliseconds, stored using length encoding.
$value-type          # One byte is used to indicate the value type.
$string-encoded-key  # A key encoded using string encoding.
$encoded-value       # An encoded value. Encoding varies according to the value type.
$value-type          # The following key-value pair has no expiration time. To prevent conflicts, the value type does not start with FD, FC, FE, or FF.
$string-encoded-key
$encoded-value
FE $length-encoding  # The next DB starts. The DB number is encoded using length encoding.
...                  # Continue to store key-value pairs in the database.
FF                   # "FF" indicates the end of the RDB file.
15
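As a quick check of the layout described above, the header of any RDB snapshot can be read directly. The file path is an assumption, and a full parse of the body is left to dedicated tools:

# Read the RDB header: the 5-byte magic string "REDIS" followed by a 4-digit ASCII version.
with open("dump.rdb", "rb") as f:        # assumed local snapshot path
    magic = f.read(5)                    # bytes 52 45 44 49 53 -> b"REDIS"
    version = f.read(4)                  # e.g. b"0003"
if magic != b"REDIS":
    raise ValueError("not an RDB file")
print("RDB version:", version.decode())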
Redis Migration Principles - PSYNC
Currently, Redis uses PSYNC for online incremental data synchronization and master-replica replication. PSYNC involves three concepts: runid, offset, and replication backlog buffer.
Runid
Each Redis server has an ID indicating its identity. This ID is sent in PSYNC to indicate the previously connected master. If no ID is saved, PSYNC ? -1 is sent to the master, indicating that full replication is required.
Offset
The master and the replica maintain their own offsets. After successfully sending N bytes, the master increases its offset by N. After receiving the N bytes, the replica also increases its offset by N. If the master and the replica are in the same state, their offsets should be the same.
Replication backlog buffer
The replication backlog buffer is a fixed-length FIFO queue maintained by the master. It buffers the commands that have been sent. While sending commands to all replicas, the master also writes the commands to the backlog buffer.
Synchronization flow: the client sends SLAVEOF. For the first replication, the replica sends PSYNC ? -1; otherwise it sends PSYNC <runid> <offset>. If the master replies CONTINUE, a partial sync is performed; otherwise, a full sync is performed.
16
Redis Migration Principles - PSYNC
⚫ PSYNC data synchronization process:
1. The replica sends PSYNC ? -1.
2. The master returns FULLRESYNC in response to the command.
3. The replica records the master ID (runid) and offset.
4. The master runs BGSAVE and saves the RDB file locally.
5. The master sends the RDB file to the replica.
6. The replica receives the RDB file and loads it into memory.
7. While sending the RDB file, the master buffers new write commands in the replication backlog. After the replica has loaded the RDB file, the master sends the buffered data. (If the replica spends too long in this step, the buffer overflows and the full synchronization fails.)
17
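This is essentially what online migration tools such as redis-shake do when they attach to the source as a pseudo-replica. A bare-bones sketch of the handshake over a raw socket, under the assumption that the source allows PSYNC and that the address is reachable; real tools also send REPLCONF, parse the RDB payload, and keep consuming the incremental command stream:

import socket

s = socket.create_connection(("source-redis.example.com", 6379))  # assumed source address
s.sendall(b"PSYNC ? -1\r\n")        # "?" runid and offset -1 request a full resynchronization
reply = s.recv(4096)
print(reply.split(b"\r\n")[0])      # expected: b'+FULLRESYNC <runid> <offset>'
# The master then runs BGSAVE and streams the RDB payload ($<length>\r\n<bytes>),
# followed by the buffered incremental commands from the replication backlog.
s.close()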
Redis Migration Principles - SCAN
⚫ Some tools use SCAN for online full synchronization of Redis. SCAN can traverse data in Redis but cannot process incremental data, so it can only be used for full synchronization.
redis 127.0.0.1:6379> scan 0   # The cursor is set to 0, indicating a new iteration.
1) "17"                        # The cursor to be used in the next iteration is 17.
2)  1) "key:12"
    2) "key:8"
    3) "key:4"
    4) "key:14"
    5) "key:16"
    6) "key:17"
    7) "key:15"
    8) "key:10"
    9) "key:3"
   10) "key:7"
   11) "key:1"
redis 127.0.0.1:6379> scan 17  # Cursor 17 returned during the first iteration is used to start a new iteration.
1) "0"                         # Cursor 0 indicates that the iteration is complete.
2) 1) "key:5"
   2) "key:18"
   3) "key:0"
   4) "key:2"
   5) "key:19"
   6) "key:13"
   7) "key:6"
   8) "key:9"
   9) "key:11"
18
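In outline, SCAN-based migration tools traverse the source with SCAN, DUMP each key, and RESTORE it on the target with its remaining TTL. A minimal sketch with redis-py; both addresses are assumptions, and real tools add pipelining, big-key handling, and retries:

import redis

src = redis.Redis(host="source-redis.example.com", port=6379)  # assumed source
dst = redis.Redis(host="dcs-redis.example.com", port=6379)     # assumed target DCS instance

for key in src.scan_iter(count=1000):    # SCAN-based traversal: full data only, no increments
    data = src.dump(key)                 # serialized value in Redis' internal format
    if data is None:
        continue                         # the key expired or was deleted during the scan
    ttl = src.pttl(key)                  # remaining TTL in ms; -1 means no expiration
    dst.restore(key, ttl if ttl > 0 else 0, data, replace=True)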
Redis Migration Tools
Tool/Service: DCS console
Feature: Simple operations; both online and offline migration; supports migration from a higher Redis version to a lower one.
Scenario: Migration from IDC to Huawei Cloud, from another cloud to Huawei Cloud, from Huawei Cloud to IDC, and from Huawei Cloud to another cloud. When migrating from another cloud to Huawei Cloud, enable the PSYNC command on the source Redis.

Tool/Service: redis-shake
Feature: Both online and offline migration; fast offline migration.
Scenario: Migration from IDC to Huawei Cloud, from another cloud to Huawei Cloud, from Huawei Cloud to IDC, and from Huawei Cloud to another cloud. For online migration, enable the PSYNC command on the source Redis.
19
Redis Migration Tools - DCS Console
20
Redis Migration Schemes - Online Migration
Migration process
Phase 1: Prepare for data synchronization.
(Figure: the customer applications read from and write to the source Redis on XX Cloud; data is incrementally synchronized from the source Redis to the target Redis on Huawei Cloud, and from the target Redis to a backup Redis at the source.)
1. Create the target Redis on Huawei Cloud.
2. Create an online migration task to synchronize data from the source Redis to the target Redis.
3. Create a backup Redis at the source.
4. Create an online migration task to synchronize data from the target Redis to the backup Redis.
23
Redis Migration Schemes - Online Migration
Migration process
Phase 2: Stop the service from writing data.
5. Stop writing data to the source Redis. Monitor the source Redis and check that it has no traffic: if the number of requests and the traffic volume are 0, there is no traffic.
6. Check that the offset of the online incremental migration task is 0 on the GUI or by calling an API.
24
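Steps 5 and 6 can also be spot-checked from the command line. A minimal sketch with redis-py, assuming direct access to the source Redis; the address is an assumption, and the console or migration API remains the authoritative view of the task offset:

import redis

src = redis.Redis(host="source-redis.example.com", port=6379)  # assumed source Redis

stats = src.info("stats")
print("ops/sec:", stats["instantaneous_ops_per_sec"])      # should drop to 0 once writes stop
print("input kbps:", stats["instantaneous_input_kbps"])    # traffic volume should also be 0

repl = src.info("replication")
# For PSYNC-based sync, the replica has caught up when its offset equals the master's.
print("master_repl_offset:", repl["master_repl_offset"])
print(repl.get("slave0"))  # parsed by redis-py into {'ip': ..., 'state': ..., 'offset': ..., 'lag': ...}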
Redis Migration Schemes - Online Migration
Migration process
Phase 3: Stop the migration task.
7. Stop the online migration task by clicking the stop button on the GUI or by calling an API.
25
Redis Migration Schemes - Online Migration
Migration process
Phase 4: Switch the service access address.
8. The customer switches the access address in the service to the target Redis address. After the switchover, the customer verifies service functions. If they work properly, the migration is complete. If they do not, proceed with the subsequent rollback operations.
26
Redis Migration Schemes - Online Migration
Rollback process
Rollback phase 1: Stop the service from writing
data.
Customer applications
XX Cloud Read/Write Huawei Cloud 1. Stop writing data to the target Redis.
Monitor the target Redis and check that
it has no traffic. If the number of
Service Service
requests and traffic volume are 0,
Read/Write there is no traffic.
Redis 2. Check that the offset of the online
Redis incremental migration task is 0 on the
Incremental
sync GUI or by calling an API.
Backup
Redis
27
Redis Migration Schemes - Online Migration
Rollback process
Rollback phase 2: Stop data synchronization.
3. Stop the online migration task by clicking the stop button on the GUI or by calling an API.
28
Redis Migration Schemes - Online Migration
Rollback process
Rollback phase 3: Switch the service back.
4. The customer switches the access address in the service back to the source Redis address.
29
Contents
1. Service Overview
2. Migrating Self-Built Redis Services
3. Migrating Self-Built Kafka Services
31
Kafka Switchover and Rollback
(Figure: the customer application's production and consumption services connect to the self-built Kafka instance on one side and to DMS on Huawei Cloud on the other; ①-⑥ mark the connections and services referenced below.)
Switchover
1. Migrate production: Stop connections ① between the customer application and the services, stop production service ②, and start production service ③ on Huawei Cloud.
2. Migrate consumption: When all messages have been consumed, stop consumption service ④, and start consumption service ⑤ on Huawei Cloud.
3. If a service is both a consumer and a producer, switch both roles together when consumption completes.
Rollback
Similar to the migration process: stop traffic and production services on Huawei Cloud. After all messages are consumed, stop the consumption service, and then start services on the self-built Kafka.
32
Phase 1: Stop Production Services
(Figure: the production service stops writing to the self-built Kafka first, while the consumption service keeps reading until the consumer group lag is 0.)
1. Stop the connections between the customer application and the production service, then run the following command until all messages in the consumer group are consumed (the LAG field is 0):
./kafka-consumer-groups.sh --bootstrap-server {connect_address} --describe --group test
2. After message consumption completes, stop the consumption service.
33
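The LAG column reported by kafka-consumer-groups.sh is the log end offset minus the group's committed offset for each partition. If a programmatic check is preferred, a minimal sketch with the kafka-python client; the broker address and topic name are assumptions, and the group name test is taken from the command above:

from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(bootstrap_servers="kafka.example.com:9092",  # assumed broker address
                         group_id="test", enable_auto_commit=False)
partitions = [TopicPartition("orders", p)                             # assumed topic name
              for p in consumer.partitions_for_topic("orders")]
end_offsets = consumer.end_offsets(partitions)
for tp in partitions:
    committed = consumer.committed(tp) or 0    # committed offset of group "test"
    print(f"partition {tp.partition}: lag = {end_offsets[tp] - committed}")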
Phase 2: The New Application Uses Huawei Cloud Kafka for Production and Consumption
(Figure: the production and consumption services now write to and read from the DMS Kafka instance on Huawei Cloud.)
1. Start the message production and consumption services on Huawei Cloud, receive the customer traffic, and check whether the services are normal.
34
Rollback Phase 1: Stop Production Services
(Figure: writes to the Huawei Cloud Kafka are stopped first, while consumption continues until the consumer group lag is 0.)
1. Stop the connections between the customer application and the production service, then run the following command until all messages in the consumer group are consumed (the LAG field is 0):
./kafka-consumer-groups.sh --bootstrap-server {connect_address} --describe --group test
2. After message consumption completes, stop the consumption service.
35
Rollback Phase 2: Switch Applications Back to the Original Kafka
(Figure: the production and consumption services write to and read from the original self-built Kafka again.)
1. Start the original message production and consumption services, receive the customer traffic, and check whether the services are normal.
36
Thank You.
Copyright©2023 Huawei Technologies Co., Ltd. All Rights Reserved.
The information in this document may contain predictive statements including,
without limitation, statements regarding the future financial and operating
results, future product portfolio, new technology, etc. There are a number of
factors that could cause actual results and developments to differ materially
from those expressed or implied in the predictive statements. Therefore, such
information is provided for reference purpose only and constitutes neither an
offer nor an acceptance. Huawei may change the information at any time
without notice.
37