Cluster Deployment

1. Tools and Environment Preparation

1.1 Tool Overview

Download the base deployment packages from the official website. The toolkit contains the following components:

Nebula Graph provides official rpm and deb packages, so it can be installed directly on CentOS, Ubuntu, or Debian without compiling from source. Download link: https://nebula-graph.io/download/. Packages are available for both CentOS 7 and CentOS 8, and Ubuntu 18.04 and 20.04 are supported. For Debian 10, use the ubuntu1804 package; for Debian 11, use the ubuntu2004 package, since Ubuntu 18.04 and Ubuntu 20.04 are based on Debian 10 and Debian 11 respectively. After downloading, install the package directly.

This deployment uses nebula-graph-3.1.0.el7.x86_64.rpm, nebula-console-linux-amd64-v3.0.0, and nebula-graph-studio-3.3.2.x86_64.rpm.

Package   Description   Download URL
nebula-graph-3.1.0.el7.x86_64.rpm (for CentOS 7)   Nebula installation package   https://oss-cdn.nebula-graph.com.cn/package/3.1.0/nebula-graph-3.1.0.el7.x86_64.tar.gz?response-content-type=application/octet-stream
nebula-graph-3.1.0.el8.x86_64.rpm (for CentOS 8)   Nebula installation package   https://oss-cdn.nebula-graph.com.cn/package/3.1.0/nebula-graph-3.1.0.el8.x86_64.tar.gz?response-content-type=application/octet-stream
nebula-graph-3.1.0.ubuntu1804.amd64.deb (for Ubuntu 18.04)   Nebula installation package   https://oss-cdn.nebula-graph.com.cn/package/3.1.0/nebula-graph-3.1.0.ubuntu1804.amd64.deb?response-content-type=application/octet-stream
nebula-graph-3.1.0.ubuntu2004.amd64.deb (for Ubuntu 20.04)   Nebula installation package   https://oss-cdn.nebula-graph.com.cn/package/3.1.0/nebula-graph-3.1.0.ubuntu2004.amd64.deb?response-content-type=application/octet-stream
nebula-console-linux-amd64-v3.0.0   Nebula client   https://github.com/vesoft-inc/nebula-console/releases
nebula-graph-studio-3.3.2.x86_64.rpm   Nebula web service (Studio)   https://oss-cdn.nebula-graph.com.cn/nebula-graph-studio/3.3.2/nebula-graph-studio-3.3.2.x86_64.rpm

1.2 Service Overview

Service   Port   Purpose
Meta      9559   Metadata service
Graph     9669   Query service
Storage   9779   Data storage service
Studio    7001   Nebula web UI, graph database visualization tool

1.3 Environment Preparation

1.3.1 Hosts File

Configure a standard hostname on each machine and keep the hosts file consistent with the host plan, for example:
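
A minimal /etc/hosts sketch, using the hostnames and IPs from the deployment plan in Section 2 (adjust to your environment):

# Append to /etc/hosts on every node
192.168.0.221 node2
192.168.0.222 node3
192.168.0.220 node4

# Set the hostname on each node accordingly, e.g. on node2:
hostnamectl set-hostname node2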

1.3.2 Firewall and SELinux
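
A minimal sketch for CentOS 7, assuming it is acceptable to disable the firewall and set SELinux to permissive in this environment (otherwise open the service ports listed in Section 1.2 instead):

# Disable firewalld (or open ports 9559/9669/9779/7001 and the 19xxx HTTP ports instead)
systemctl stop firewalld
systemctl disable firewalld

# Set SELinux to permissive for the current session and across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config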

1.3.3 Passwordless SSH

Passwordless SSH (mutual trust) must be configured between the nodes, for example:
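
A minimal sketch, assuming the root account and the node2/node3/node4 hostnames from the hosts file above:

# Generate a key pair and distribute it to every node
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for h in node2 node3 node4; do ssh-copy-id root@$h; done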

1.3.4 NTP Time Synchronization

Time synchronization must be configured on all nodes, for example:
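
A minimal sketch using chrony on CentOS 7, assuming the nodes can reach the default NTP pool (point chrony at an internal NTP server if required):

yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
# Verify that the clock is synchronized
chronyc tracking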

2 Nebula Graph Deployment

Nebula Graph cluster deployment plan:

Hostname   IP address      graphd processes   storaged processes   metad processes
node2 192.168.0.221 1 1 1
node3 192.168.0.222 1 1 1
node4 192.168.0.220 1 1 1

2.1 Installing Nebula Graph

Check the OS version of each machine:

lsb_release -a

Output:

LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description:    CentOS Linux release 7.5.1804 (Core) 
Release:        7.5.1804
Codename:       Core

Choose the Nebula installation package that matches the machine's OS.

On the Nebula Graph download page, find and download the 3.1.0 rpm package for the corresponding CentOS version. Download link: https://nebula-graph.io/download/

Run the installation command:

rpm -ivh nebula-graph-3.1.0.el7.x86_64.rpm

By default it is installed under /usr/local/nebula/.

You can also use the --prefix=<installation_path> option to specify the installation directory, for example:

rpm -ivh --prefix=/opt/servers/nebula-graph-3.1.0 nebula-graph-3.1.0.el7.x86_64.rpm

After installation, it is convenient to add Nebula's bin and scripts directories to the PATH:

# Add this to /etc/profile or ~/.bashrc
##NEBULA_HOME
export NEBULA_HOME=/usr/local/nebula
export PATH=$PATH:$NEBULA_HOME/bin
export PATH=$PATH:$NEBULA_HOME/scripts 
source /etc/profile

After installation, configure the services.

2.2 Modifying the Nebula Graph Configuration Files

All Nebula Graph configuration files are located in the etc directory of the installation directory (by default /usr/local/nebula/etc), including nebula-metad.conf, nebula-graphd.conf, and nebula-storaged.conf.

nebula-metad.conf

First, modify the nebula-metad configuration file /usr/local/nebula/etc/nebula-metad.conf. Most settings can keep their defaults, but the time zone, data directory, and network settings need to be changed. The modified configuration is shown below.

########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=pids/nebula-metad.pid
# Added time zone setting; the default is UTC, changed here to UTC+8
# This time zone only affects conversion of time-type data being written; it does not affect log output
--timezone_name=UTC+08:00

########## logging ##########
# The directory to host logging files
--log_dir=logs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4, the higher of the level, the more verbose of the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=metad-stdout.log
--stderr_log_file=metad-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=2
# Whether logging file names contain a timestamp. If using logrotate to rotate logging files, this should be set to true.
--timestamp_in_logfile_name=true

########## networking ##########
# Comma separated Meta Server addresses
# Set the Meta Server addresses; use IPs, not hostnames
--meta_server_addrs=192.168.0.220:9559,192.168.0.221:9559,192.168.0.222:9559
# Local IP used to identify the nebula-metad process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
# Set the local IP of this node
--local_ip=192.168.0.221
# Meta daemon listening port
--port=9559
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19559
# Port to listen on Storage with HTTP protocol, it corresponds to ws_http_port in storage's configuration file
--ws_storage_http_port=19779

########## storage ##########
# Root data path, here should be only single path for metad
# Set the metadata storage directory for metad
--data_path=/data/nebula/meta

########## Misc #########
# The default number of parts when a space is created
--default_parts_num=100
# The default replica factor when a space is created
--default_replica_factor=3

--heartbeat_interval_secs=10
--agent_heartbeat_interval_secs=60

--default_parts_num and --default_replica_factor are the default number of partitions and replicas used when a space is created. For a cluster, a replica factor of 3 is recommended; these values can also be specified explicitly when creating a space.
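
For example, a sketch of overriding these defaults at creation time (the space name test_space and the vid_type are illustrative, not from the original):

CREATE SPACE IF NOT EXISTS test_space (partition_num = 100, replica_factor = 3, vid_type = FIXED_STRING(32));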

For large clusters, metad, storaged, and graphd are usually deployed on separate machines; for small clusters they can be co-located. metad should run on multiple nodes for high availability: 3 nodes for a typical cluster, 5 for a large one. After saving the configuration, synchronize the file to all nodes that will run the meta service. On each node, change --local_ip to that node's actual IP; the remaining settings stay the same.

nebula-storaged.conf

Next, modify the storaged configuration file /usr/local/nebula/etc/nebula-storaged.conf as follows:

########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=pids/nebula-storaged.pid
# Whether to use the configuration obtained from the configuration file
--local_config=true
# The time zone must be configured here as well
--timezone_name=UTC+08:00

########## logging ##########
# The directory to host logging files
--log_dir=logs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4, the higher of the level, the more verbose of the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=storaged-stdout.log
--stderr_log_file=storaged-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=2
# Whether logging file names contain a timestamp.
--timestamp_in_logfile_name=true

########## networking ##########
# Comma separated Meta server addresses
# Meta Server addresses
--meta_server_addrs=192.168.0.220:9559,192.168.0.221:9559,192.168.0.222:9559
# Local IP used to identify the nebula-storaged process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
# Local IP of this node
--local_ip=192.168.0.221
# Storage daemon listening port
--port=9779
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19779
# heartbeat with meta service
--heartbeat_interval_secs=10

######### Raft #########
# Raft election timeout
--raft_heartbeat_interval_secs=30
# RPC timeout for raft client (ms)
--raft_rpc_timeout_ms=500
## recycle Raft WAL
--wal_ttl=14400

########## Disk ##########
# Root data path. Split by comma. e.g. --data_path=/disk1/path1/,/disk2/path2/
# One path per Rocksdb instance.
# Data directories; multiple paths are supported
--data_path=/data/nebula/storage

# Minimum reserved bytes of each data path
--minimum_reserved_bytes=268435456

# The default reserved bytes for one batch operation
--rocksdb_batch_size=4096
# The default block cache size used in BlockBasedTable.
# The unit is MB.
--rocksdb_block_cache=4
# The type of storage engine, `rocksdb', `memory', etc.
--engine_type=rocksdb

# Compression algorithm, options: no,snappy,lz4,lz4hc,zlib,bzip2,zstd
# For the sake of binary compatibility, the default value is snappy.
# Recommend to use:
#   * lz4 to gain more CPU performance, with the same compression ratio with snappy
#   * zstd to occupy less disk space
#   * lz4hc for the read-heavy write-light scenario
--rocksdb_compression=lz4

# Set different compressions for different levels
# For example, if --rocksdb_compression is snappy,
# "no:no:lz4:lz4::zstd" is identical to "no:no:lz4:lz4:snappy:zstd:snappy"
# In order to disable compression for level 0/1, set it to "no:no"
--rocksdb_compression_per_level=

# Whether or not to enable rocksdb's statistics, disabled by default
--enable_rocksdb_statistics=false

# Statslevel used by rocksdb to collection statistics, optional values are
#   * kExceptHistogramOrTimers, disable timer stats, and skip histogram stats
#   * kExceptTimers, Skip timer stats
#   * kExceptDetailedTimers, Collect all stats except time inside mutex lock AND time spent on compression.
#   * kExceptTimeForMutex, Collect all stats except the counters requiring to get time inside the mutex lock.
#   * kAll, Collect all stats
--rocksdb_stats_level=kExceptHistogramOrTimers

# Whether or not to enable rocksdb's prefix bloom filter, enabled by default.
--enable_rocksdb_prefix_filtering=true
# Whether or not to enable rocksdb's whole key bloom filter, disabled by default.
--enable_rocksdb_whole_key_filtering=false

############## Key-Value separation ##############
# Whether or not to enable BlobDB (RocksDB key-value separation support)
--rocksdb_enable_kv_separation=false
# RocksDB key value separation threshold in bytes. Values at or above this threshold will be written to blob files during flush or compaction.
--rocksdb_kv_separation_threshold=100
# Compression algorithm for blobs, options: no,snappy,lz4,lz4hc,zlib,bzip2,zstd
--rocksdb_blob_compression=lz4
# Whether to garbage collect blobs during compaction
--rocksdb_enable_blob_garbage_collection=true

############## rocksdb Options ##############
# rocksdb DBOptions in json, each name and value of option is a string, given as "option_name":"option_value" separated by comma
--rocksdb_db_options={}
# rocksdb ColumnFamilyOptions in json, each name and value of option is string, given as "option_name":"option_value" separated by comma
--rocksdb_column_family_options={"write_buffer_size":"67108864","max_write_buffer_number":"4","max_bytes_for_level_base":"268435456"}
# rocksdb BlockBasedTableOptions in json, each name and value of option is string, given as "option_name":"option_value" separated by comma
--rocksdb_block_based_table_options={"block_size":"8192"}

Apart from the items called out above, the remaining settings normally do not need to be changed; see the English comments for their meaning. Again, set --local_ip on each node to that node's actual IP and keep everything else the same.

nebula-graphd.conf

Finally, modify the graphd configuration file /usr/local/nebula/etc/nebula-graphd.conf as follows:

########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=pids/nebula-graphd.pid
# Whether to enable optimizer
--enable_optimizer=true
# The default charset when a space is created
--default_charset=utf8
# The default collate when a space is created
--default_collate=utf8_bin
# Whether to use the configuration obtained from the configuration file
--local_config=true

########## logging ##########
# The directory to host logging files
--log_dir=logs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4, the higher of the level, the more verbose of the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=graphd-stdout.log
--stderr_log_file=graphd-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=2
# Whether logging file names contain a timestamp.
--timestamp_in_logfile_name=true
########## query ##########
# Whether to treat partial success as an error.
# This flag is only used for Read-only access, and Modify access always treats partial success as an error.
--accept_partial_success=false
# Maximum sentence length, unit byte
--max_allowed_query_size=4194304

########## networking ##########
# Comma separated Meta Server Addresses
# Meta Server addresses; use IPs
--meta_server_addrs=192.168.0.220:9559,192.168.0.221:9559,192.168.0.222:9559
# Local IP used to identify the nebula-graphd process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
# Local IP of this node
--local_ip=192.168.0.221
# Network device to listen on
--listen_netdev=any
# Port to listen on
--port=9669
# To turn on SO_REUSEPORT or not
--reuse_port=false
# Backlog of the listen socket, adjust this together with net.core.somaxconn
--listen_backlog=1024
# The number of seconds Nebula service waits before closing the idle connections
--client_idle_timeout_secs=28800
# The number of seconds before idle sessions expire
# The range should be in [1, 604800]
--session_idle_timeout_secs=28800
# The number of threads to accept incoming connections
--num_accept_threads=1
# The number of networking IO threads, 0 for # of CPU cores
--num_netio_threads=0
# The number of threads to execute user queries, 0 for # of CPU cores
--num_worker_threads=0
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19669
# storage client timeout
--storage_client_timeout_ms=60000
# Port to listen on Meta with HTTP protocol, it corresponds to ws_http_port in metad's configuration file
--ws_meta_http_port=19559

########## authentication ##########
# Enable authorization
# Authentication
--enable_authorize=true
# User login authentication type, password for nebula authentication, ldap for ldap authentication, cloud for cloud authentication
--auth_type=password

########## memory ##########
# System memory high watermark ratio, cancel the memory checking when the ratio greater than 1.0
# Memory usage high-watermark; writes are throttled above this ratio. The default is 0.8, changed here to 0.9
--system_memory_high_watermark_ratio=0.9

########## metrics ##########
--enable_space_level_metrics=false

########## experimental feature ##########
# if use experimental features
--enable_experimental_feature=false

As before, set --local_ip to the current node's IP; the other settings stay the same and are explained in the comments. After saving, synchronize nebula-metad.conf, nebula-storaged.conf, and nebula-graphd.conf to all metad, storaged, and graphd nodes, and change --local_ip on each node to its actual IP. It is also recommended to keep the default ports for all services: changing them requires matching changes in the other services and is error-prone, so keep the defaults unless you must change them.

Distribute the installation directory:

cd /usr/local

scp -r nebula root@node3:$PWD
scp -r nebula root@node4:$PWD

On node3 and node4, change the --local_ip setting in nebula-metad.conf, nebula-storaged.conf, and nebula-graphd.conf to each machine's own IP, as in the sketch below.
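
A sketch of doing this with sed, assuming the configuration was copied from node2 (192.168.0.221); run the equivalent command on node4 with its own IP:

# On node3 (192.168.0.222), replace the copied local_ip with this node's IP
sed -i 's/^--local_ip=192.168.0.221/--local_ip=192.168.0.222/' \
  /usr/local/nebula/etc/nebula-metad.conf \
  /usr/local/nebula/etc/nebula-storaged.conf \
  /usr/local/nebula/etc/nebula-graphd.conf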

2.3 Starting the Nebula Graph Cluster

After the configuration files have been synchronized to every node and each node's --local_ip has been updated, start the Nebula Graph services in order:

# Start the meta service on all meta nodes and check its status
/usr/local/nebula/scripts/nebula-metad.service start
/usr/local/nebula/scripts/nebula-metad.service status
# Start the storage service on all storage nodes
/usr/local/nebula/scripts/nebula-storaged.service start
/usr/local/nebula/scripts/nebula-storaged.service status
# Start the graphd service on all graphd nodes
/usr/local/nebula/scripts/nebula-graphd.service start
/usr/local/nebula/scripts/nebula-graphd.service status

You can also start all services with the start all command (the storaged service may show as abnormal (red) for quite a while, so this method is not recommended):

/usr/local/nebula/scripts/nebula.service start all

Check the status of all services:

/usr/local/nebula/scripts/nebula.service status all

Below is an example of a successful startup. (nebula-storaged shows as abnormal because, starting with Nebula 3.x, the storage service only becomes active after the ADD HOSTS command is executed; Section 3.2 covers how to add the storaged hosts.)

[Screenshot: service status after startup]

By default, nebula-metad listens on port 9559, nebula-graphd on 9669, and nebula-storaged on 9779. The service clients normally connect to is nebula-graphd, i.e. port 9669.
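
A quick sanity check that the expected ports are listening on a node (ss is assumed to be available; lsof works as well):

ss -tlnp | grep -E '9559|9669|9779|19559|19669|19779'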

The services can be managed individually with nebula-metad.service, nebula-graphd.service, and nebula-storaged.service, or via nebula.service by replacing the second argument all with metad, graphd, or storaged.
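
For example, a usage sketch of managing a single component through nebula.service (assuming the script also accepts a restart action in addition to the start/stop/status actions shown above):

/usr/local/nebula/scripts/nebula.service status graphd
/usr/local/nebula/scripts/nebula.service restart storaged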

To stop the services, use the stop command:

# Stop all services
/usr/local/nebula/scripts/nebula.service stop all

# Stop the services individually
/usr/local/nebula/scripts/nebula-metad.service stop
/usr/local/nebula/scripts/nebula-storaged.service stop
/usr/local/nebula/scripts/nebula-graphd.service stop

Note: do not find the process IDs with ps and then kill -9 them directly; doing so risks data loss.

If only one meta node is started, it will keep waiting for an election; once the whole cluster is up, a leader is elected:

[Screenshot: metad leader election]

After the cluster is configured and started, you can test metad failover: stop the meta service that is currently the leader and watch the switchover, which normally completes in about 10-20 seconds. If there are not enough meta votes, no leader can be elected, and sessions cannot be established to run queries. The replica factor of a space must also be an odd number, because partition leaders are elected with the Raft algorithm as well: with 2 replicas, losing any one storage node makes the whole cluster unavailable. Partition failover likewise takes roughly 10-20 seconds. After a storage node recovers, partitions are not rebalanced automatically; at that point you can rebalance the partition leaders manually to reduce the request load on individual storage nodes:

Run the following commands in the nebula-console client:

BALANCE LEADER;
# Check the balance result
SHOW HOSTS;

The following result is from an earlier cluster: [Screenshot: SHOW HOSTS after BALANCE LEADER]

Once leader balancing is complete, the number of partition leaders on each storage node is roughly even.

2.4 Troubleshooting

Problem:

If you have started the services several times and startup fails, some of the listening ports may already be in use.

Below are the port-already-in-use errors I hit after starting Nebula repeatedly.

A graphd port is occupied:

[Screenshot: graphd port-in-use error]

A metad port is occupied:

[Screenshot: metad port-in-use error]

Solution:

Find the PID that is using the port:

lsof -i:9560

Then kill the corresponding PID.

[Screenshot: killing the process occupying the port]

Finally, restart the affected service.

2.5 Setting Up systemd Services for Nebula

This must be configured on every node in the cluster.

Go to the `systemd` unit directory:
cd /usr/lib/systemd/system/

Create the metad unit:

vim nebula-metad.service

Add the following content (the paths below are the default rpm installation paths; adjust as needed):

[Unit]
Description=Nebula Graph Metad Service
After=network.target

[Service]
Type=forking
Restart=on-failure
RestartSec=5s
PIDFile=/usr/local/nebula/pids/nebula-metad.pid
ExecStart=/usr/local/nebula/scripts/nebula-metad.service start
ExecReload=/usr/local/nebula/scripts/nebula-metad.service restart
ExecStop=/usr/local/nebula/scripts/nebula-metad.service stop
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Create the storaged unit:

vim nebula-storaged.service

Add the following content (the paths below are the default rpm installation paths; adjust as needed):

[Unit]
Description=Nebula Graph Storaged Service
After=network.target

[Service]
Type=forking
Restart=on-failure
RestartSec=5s
PIDFile=/usr/local/nebula/pids/nebula-storaged.pid
ExecStart=/usr/local/nebula/scripts/nebula-storaged.service start
ExecReload=/usr/local/nebula/scripts/nebula-storaged.service restart
ExecStop=/usr/local/nebula/scripts/nebula-storaged.service stop
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Create the graphd unit:

vim nebula-graphd.service

Add the following content (the paths below are the default rpm installation paths; adjust as needed):

[Unit]
Description=Nebula Graph Graphd Service
After=network.target

[Service]
Type=forking
Restart=on-failure
RestartSec=5s
PIDFile=/usr/local/nebula/pids/nebula-graphd.pid
ExecStart=/usr/local/nebula/scripts/nebula-graphd.service start
ExecReload=/usr/local/nebula/scripts/nebula-graphd.service restart
ExecStop=/usr/local/nebula/scripts/nebula-graphd.service stop
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Reload the systemd configuration:

systemctl daemon-reload

If the services are not running, start them with systemctl:

systemctl start nebula-metad
systemctl start nebula-storaged
systemctl start nebula-graphd

Enable the services to start at boot:

systemctl enable nebula-metad
systemctl enable nebula-storaged
systemctl enable nebula-graphd
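
A quick check that the units are active and enabled (standard systemctl usage):

systemctl status nebula-metad nebula-storaged nebula-graphd
systemctl is-enabled nebula-metad nebula-storaged nebula-graphd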

3 Installing the nebula-console Client

3.1 Client Installation

The Nebula Graph release packages do not include a client. The official pure command-line client, nebula-console, is maintained in a separate repository at https://github.com/vesoft-inc/nebula-console, and prebuilt binaries can be downloaded from https://github.com/vesoft-inc/nebula-console/releases. The binary used here is nebula-console-linux-amd64-v3.0.0; simply place it in Nebula's bin directory:

mv nebula-console-linux-amd64-v3.0.0 /usr/local/nebula/bin/nebula-console
chmod 755 /usr/local/nebula/bin/nebula-console

Then use nebula-console to connect to the Nebula Graph service on any machine for a test:

# With the default configuration, authentication is disabled and any password works
nebula-console -addr 127.0.0.1 -port 9669 -u root -p 123
# If authentication was enabled in nebula-graphd.conf, the default root password is nebula; if the listen IP was changed in the configuration file, use that IP when connecting
nebula-console -addr 127.0.0.1 -port 9669 -u root -p nebula

[Screenshot: nebula-console connected successfully]

3.2 Adding the Storage Hosts

After entering the client, run the following command to register the storage hosts:

add hosts 192.168.0.220:9779,192.168.0.221:9779,192.168.0.222:9779

From the command-line interface, you can check the status of the cluster machines:

SHOW HOSTS

[Screenshot: SHOW HOSTS output]

List all spaces:

SHOW SPACES

[Screenshot: SHOW SPACES output]

Switch to a space manually:

USE nba

To exit nebula-console, run:

quit

3.3 Troubleshooting

After running ADD HOSTS and waiting for a while, and even after restarting the storage service with nebula-storaged.service restart, the service still showed as abnormal (red).

Carefully re-checking the metad and storaged configuration files revealed no errors.

The metad INFO log contained the following message:

[Screenshot: metad INFO log]

The storaged log showed heartbeat failures.

Solution:

Stop all services: /usr/local/nebula/scripts/nebula.service stop all

Delete the cluster.id file: rm -f /usr/local/nebula/cluster.id

Restart all services: /usr/local/nebula/scripts/nebula.service start all

If the storage hosts were already registered with ADD HOSTS, nebula.service status all should now show everything as normal.

4 Optional: Installing Nebula Graph Studio

Studio deployment prerequisites

Before deploying Studio, confirm the following:

  • The Nebula Graph services have been deployed and started.

  • The Linux distribution is CentOS, with lsof installed.

  • Studio currently supports only the x86_64 architecture.

  • Studio only supports uploading CSV files without a header row; there is no limit on the size or retention time of a single file, and the total data volume is limited only by local storage capacity.

  • Make sure the following port is not in use before installation.

    Port   Description
    7001   Web service provided by Studio

4.1 Deploying Studio

Download the Studio package that matches the Nebula Graph version:

Studio version   Nebula Graph version
1.x 1.x
2.x 2.0 & 2.0.1
3.0.0 2.5.x
3.1.x 2.6.x
3.2.x 3.0.0
3.3.2 3.1.0

Download link: https://oss-cdn.nebula-graph.com.cn/nebula-graph-studio/3.3.2/nebula-graph-studio-3.3.2.x86_64.rpm

Install the Studio rpm package:

rpm -ivh nebula-graph-studio-3.3.2.x86_64.rpm

You can also install to a specified path with:

rpm -i nebula-graph-studio-3.3.2.x86_64.rpm --prefix=<path> 

The rpm installation starts Studio automatically.

When the following message appears, the RPM version of Studio has started successfully. You can also check whether port 7001 is being listened on.

Nebula Studio has been installed.
Created symlink from /etc/systemd/system/multi-user.target.wants/nebula-graph-studio.service to /usr/lib/systemd/system/nebula-graph-studio.service.
Nebula Studio started automatically.

After startup, open http://192.168.0.221:7001 in a browser, where the IP address is that of the machine running Studio.

If the following login page appears in the browser, Studio has been deployed and started successfully.

Enter the Host and credentials of the Nebula Graph instance to connect to, for example: Host: 192.168.0.221:9669, Username: root, Password: nebula.

[Screenshot: Studio login page]

After connecting, you can select a graph space, enter nGQL statements in the Nebula Console area, and click the Run button to see the results displayed below.

[Screenshot: running an nGQL statement in Studio]

4.2 Uninstalling

Studio can be uninstalled with the following command:

rpm -e nebula-graph-studio-3.3.2.x86_64

When the following message appears, the RPM version of Studio has been uninstalled.

Nebula Studio removed, bye~

4.3 Troubleshooting

If automatic startup fails during installation, or if you need to start or stop the service manually, use the following commands:

  • Start the service manually

    /usr/local/nebula-graph-studio/scripts/rpm/start.sh
    
  • Stop the service manually

    /usr/local/nebula-graph-studio/scripts/rpm/stop.sh
    

If you see the error ERROR: bind EADDRINUSE 0.0.0.0:7001 when starting the service, check whether port 7001 is occupied with the following command.

$ lsof -i:7001

If the port is occupied and the process on it cannot be terminated, change the Studio service port with the following steps and restart the service.

//Change the Studio service port
$ vi config/example-config.yaml

//Edit:
web:
#  task_id_path:
#  upload_dir:
#  tasks_dir:
#  sqlitedb_file_path:
#  ip:
  port: 7001  # change this to any currently available port

//Restart the service
$ systemctl restart nebula-graph-studio.service