2023-05-06
Preparing the Kolla Offline Deployment Bundle
What to collect: RPM packages, container images, and pip packages.

Workflow:

1. Configure the yum repos with package caching enabled; after a manual deployment, build a repo from the cached RPMs.
2. After the manual deployment, export all containers into one archive.
3. After the manual deployment, record the required Python modules in a requirements.txt, download and collect them, and write an install script.

Container images

Export everything:

```bash
docker save `docker images --format "{{.Repository}}:{{.Tag}}"` | gzip > kolla_centos_train_min.tar.gz
```

Import:

```bash
docker load -i kolla_centos_train_min.tar.gz
```

RPM packages

```bash
[root@kolla ~]# mkdir /repo
[root@kolla ~]# cat /etc/yum.conf
[main]
cachedir=/repo
keepcache=1
...
[root@kolla ~]# yum install -y createrepo
[root@kolla ~]# cd /repo
[root@kolla repo]# mkdir -p kolla_centos_train_rpm/Packages
[root@kolla repo]# cp -ra */packages/* kolla_centos_train_rpm/Packages
[root@kolla repo]# cd kolla_centos_train_rpm
[root@kolla kolla_centos_train_rpm]# createrepo ./
[root@kolla kolla_centos_train_rpm]# ls
Packages  repodata
[root@kolla kolla_centos_train_rpm]# cd /repo
[root@kolla repo]# tar cfz kolla_centos_train_rpm.tar.gz kolla_centos_train_rpm
```

pip packages

```bash
[root@kolla kolla_centos_train_whl]# cat requirements.txt
setuptools==22.0.5
pip==20.3.4
wheel
kolla-ansible==9.1.0
[root@kolla kolla_centos_train_whl]# mkdir packages
[root@kolla kolla_centos_train_whl]# pip download -d packages -r requirements.txt
[root@kolla kolla_centos_train_whl]# cat install.sh
#!/bin/bash
pip install --no-index --find-links=./packages/ setuptools==22.0.5
pip install --no-index --find-links=./packages/ pip==20.3.4
pip install --no-index --find-links=./packages/ wheel
pip install --no-index --find-links=./packages/ kolla-ansible==9.1.0 --ignore-installed PyYAML
[root@kolla kolla_centos_train_whl]# cd ~
[root@kolla ~]# tar cfz kolla_centos_train_whl.tar.gz kolla_centos_train_whl/
```
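Before shipping the three archives, a quick sanity pass pays off: confirm the repo metadata resolves with every network repo disabled, the image archive is intact, and every pinned pip requirement can be satisfied from the local directory alone. A rough sketch, assuming the paths and file names used above:

```bash
#!/bin/bash
# Offline-bundle sanity check; paths and names follow the bundle built above.
set -e

# 1. The local repo must resolve with all network repos disabled.
cat > /etc/yum.repos.d/offline-test.repo <<EOF
[offline-test]
name=offline-test
baseurl=file:///repo/kolla_centos_train_rpm/
gpgcheck=0
enabled=0
EOF
yum --disablerepo='*' --enablerepo=offline-test clean metadata
yum --disablerepo='*' --enablerepo=offline-test list available | head

# 2. The image archive should pass a gzip integrity check.
gzip -t kolla_centos_train_min.tar.gz && echo "image archive OK"

# 3. Every pinned requirement should resolve from the local wheel directory alone.
pip download --no-index --find-links=kolla_centos_train_whl/packages/ \
    -d /tmp/pip-selfcheck -r kolla_centos_train_whl/requirements.txt
```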
2023-05-06
Deploy OpenStack offline based on Kolla
System: CentOS Linux release 7.9.2009 (Core)
Specs: 4 vCPU / 8 GB RAM
Storage: 50 GB system disk, 20 GB data disk (cinder)

Network plan:

- ens33, host-only (management network, IP 192.168.100.10/24)
- ens34, NAT (provider network, CIDR 192.168.10.0/24, gateway 192.168.10.2)
- VIP: 192.168.100.100

Initialize the environment

1. Set the hostname and hosts

```bash
[root@kolla ~]# hostnamectl set-hostname kolla
```

RabbitMQ may require working hostname resolution.

2. Configure the network

```bash
[root@kolla ~]# cat > /etc/sysconfig/network-scripts/ifcfg-ens34 <<EOF
NAME=ens34
DEVICE=ens34
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
EOF
[root@kolla ~]# nmcli con reload && nmcli con up ens34
```

Two NICs are required; ens34 carries the provider network.

3. Upload the packages

```
99cloud_skyline.tar.gz          // Skyline container image
kolla_centos_train_min.tar.gz   // container images (minimal set)
kolla_centos_train_rpm.tar.gz   // RPM dependencies
kolla_centos_train_whl.tar.gz   // pip dependencies
```

4. Create the LVM volume group

```bash
[root@kolla ~]# pvcreate /dev/sdb
[root@kolla ~]# vgcreate cinder-volumes /dev/sdb
```

Note: the volume group name must match the `cinder_volume_group` parameter.

5. Configure the repos

Configure yum:

```bash
[root@kolla ~]# mkdir /etc/yum.repos.d/bak
[root@kolla ~]# mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak
[root@kolla ~]# tar xf kolla_centos_train_rpm.tar.gz -C /opt/
[root@kolla ~]# cat > /etc/yum.repos.d/local.repo << EOF
[kolla]
name=kolla - acha
baseurl=file:///opt/kolla_centos_train_rpm/
gpgcheck=0
enabled=1
EOF
```

Install the dependencies:

```bash
[root@kolla ~]# yum install -y python-devel libffi-devel gcc openssl-devel \
libselinux-python python2-pip python-pbr ansible
```

Install some everyday tools:

```bash
[root@kolla ~]# yum install -y vim unzip net-tools lrzsz tree bash-completion
```

Set up the deployment environment

1. Install kolla-ansible

```bash
[root@kolla ~]# tar xf kolla_centos_train_whl.tar.gz
[root@kolla ~]# cd kolla_centos_train_whl
[root@kolla kolla_centos_train_whl]# ./install.sh
```

2. Configure

Prepare the configuration files:

```bash
[root@kolla kolla_centos_train_whl]# mkdir -p /etc/kolla
[root@kolla kolla_centos_train_whl]# cd /etc/kolla
[root@kolla kolla]# chown $USER:$USER /etc/kolla
[root@kolla kolla]# cp -r /usr/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
[root@kolla kolla]# cp /usr/share/kolla-ansible/ansible/inventory/* /etc/kolla
```

Adjust the Ansible configuration:

```bash
[root@kolla kolla]# cat << EOF | sed -i '/^\[defaults\]$/ r /dev/stdin' /etc/ansible/ansible.cfg
host_key_checking=False
pipelining=True
forks=100
EOF
```

Silence the DeprecationWarning messages (lines 42-43 were commented out and a blanket warning filter added at line 44):

```bash
cat -n /usr/lib/python2.7/site-packages/ansible/parsing/vault/__init__.py | tail -n +41 | head -n 5
    41  try:
    42      #with warnings.catch_warnings():
    43      #    warnings.simplefilter("ignore", DeprecationWarning)
    44      warnings.filterwarnings("ignore")
    45      from cryptography.exceptions import InvalidSignature
```

Check the inventory:

```bash
[root@kolla /etc/kolla]# ansible -i all-in-one all -m ping
```

Generate the passwords:

```bash
[root@kolla kolla]# kolla-genpwd
```

Set keystone_admin_password:

```bash
[root@kolla kolla]# sed -i 's#keystone_admin_password:.*#keystone_admin_password: kolla#g' /etc/kolla/passwords.yml
[root@kolla kolla]# cat /etc/kolla/passwords.yml | grep keystone_admin_password
keystone_admin_password: kolla
```

Edit the global configuration file globals.yml (controls which components are installed and how they are configured):

```bash
[root@kolla kolla]# cp /etc/kolla/globals.yml{,.bak}
[root@kolla kolla]# cat >> /etc/kolla/globals.yml <<EOF
# Kolla options
kolla_base_distro: "centos"
kolla_install_type: "binary"
openstack_release: "train"
kolla_internal_vip_address: "192.168.100.100"

# Neutron - Networking Options
network_interface: "ens33"
neutron_external_interface: "ens34"
neutron_plugin_agent: "openvswitch"
enable_neutron_provider_networks: "yes"

# OpenStack services
enable_cinder: "yes"
enable_cinder_backend_lvm: "yes"
EOF
```

| Parameter | Description |
| --- | --- |
| kolla_base_distro | Linux distribution of the container images (ubuntu, centos, debian) |
| kolla_install_type | component build type (binary, source) |
| openstack_release | OpenStack release (train) |
| kolla_internal_vip_address | high-availability VIP (management network address) |
| docker_registry | Docker image registry |
| docker_namespace | namespace in the registry (kolla on Docker Hub) |
| network_interface | management NIC |
| neutron_external_interface | provider (business) NIC |
| neutron_plugin_agent | network plug-in (openvswitch, linuxbridge) |
| enable_neutron_provider_networks | enable provider networks |
| enable_cinder | enable cinder |
| enable_cinder_backend_lvm | cinder backend storage (LVM) |
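Before invoking kolla-ansible, it can save a failed deploy to confirm that the interfaces and VIP written into globals.yml actually match the host. A minimal pre-flight sketch (ens33, ens34 and the VIP are the values from the network plan above):

```bash
#!/bin/bash
# Pre-flight check for the values written into /etc/kolla/globals.yml.
ip link show ens33 >/dev/null 2>&1 || echo "management interface ens33 missing"
ip link show ens34 >/dev/null 2>&1 || echo "provider interface ens34 missing"

# The VIP must be free: keepalived claims it during deployment.
if ping -c 1 -W 1 192.168.100.100 >/dev/null 2>&1; then
    echo "WARNING: 192.168.100.100 already answers ping - choose another VIP"
fi
```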
3. Deploy

Do not enable the Docker repo:

```bash
[root@kolla kolla]# sed -i.bak "s/enable_docker_repo: true/enable_docker_repo: false/g" \
/usr/share/kolla-ansible/ansible/roles/baremetal/defaults/main.yml
```

Ignore the Docker version check:

```bash
sed -i "9a \ \ ignore_errors: yes" \
/usr/share/kolla-ansible/ansible/roles/prechecks/tasks/service_checks.yml
```

```bash
# Bootstrap: installs Docker and the Docker SDK, disables the firewall, configures time sync, etc.
[root@kolla kolla]# kolla-ansible -i ./all-in-one bootstrap-servers

# Pre-deployment environment checks
[root@kolla kolla]# kolla-ansible -i ./all-in-one prechecks

# Import the images
[root@kolla kolla]# docker load -i /root/kolla_centos_train_min.tar.gz

# Run the actual deployment, starting a container for each component
[root@kolla kolla]# kolla-ansible -i ./all-in-one deploy

# Generate the openrc file
[root@kolla kolla]# kolla-ansible post-deploy
```

4. Check

```bash
[root@kolla kolla]# docker ps -a | grep -v Up
[root@kolla kolla]# docker ps -a | wc -l    # 38 containers
[root@kolla kolla]# lvs | grep cinder
```

Install the OpenStack client

```bash
[root@kolla kolla]# yum install -y python-openstackclient
```

Run a cirros instance

```bash
[root@kolla kolla]# mkdir -p /opt/cache/files/
[root@kolla kolla]# mv cirros-0.4.0-x86_64-disk.img /opt/cache/files/

# Point the init-runonce demo script at the external network
[root@kolla kolla]# vim /usr/share/kolla-ansible/init-runonce
EXT_NET_CIDR=${EXT_NET_CIDR:-'192.168.10.0/24'}
EXT_NET_RANGE=${EXT_NET_RANGE:-'start=192.168.10.50,end=192.168.10.200'}
EXT_NET_GATEWAY=${EXT_NET_GATEWAY:-'192.168.10.2'}

# Run the script: it uploads the image to glance, creates the internal and external
# networks, a flavor and an SSH key, and boots one instance
[root@kolla kolla]# source /etc/kolla/admin-openrc.sh
[root@kolla kolla]# /usr/share/kolla-ansible/init-runonce
[root@kolla kolla]# openstack server create \
--image cirros \
--flavor m1.tiny \
--key-name mykey \
--network demo-net \
demo1
```
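Once init-runonce finishes, a handful of client commands give a quick smoke test of the control plane and the demo instance. A sketch (names such as demo-net and demo1 come from the run above):

```bash
source /etc/kolla/admin-openrc.sh
openstack service list            # every core service should be registered
openstack compute service list    # nova-compute should be 'up'
openstack network agent list      # neutron agents should be alive
openstack volume service list     # cinder-volume on the lvm backend
openstack server list             # demo1 should reach ACTIVE
openstack console url show demo1  # VNC console URL for a manual login
```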
Deploy Skyline

```bash
[root@kolla ~]# database_password=`awk '/^database_password/ {print $2}' /etc/kolla/passwords.yml`
[root@kolla ~]# docker exec -it mariadb mysql -uroot -p$database_password \
-e "CREATE DATABASE skyline DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci; GRANT ALL PRIVILEGES ON skyline.* TO 'skyline'@'%' IDENTIFIED BY '000000';"
[root@kolla ~]# docker exec -it mariadb mysql -uroot -p$database_password -e "show databases" | grep skyline
[root@kolla ~]# docker exec -it mariadb mysql -uroot -p$database_password -e "select user,host from mysql.user;" | grep skyline
[root@kolla ~]# source /etc/kolla/admin-openrc.sh
[root@kolla ~]# openstack user create --domain default --password 000000 skyline
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | f437975f5b3e424382e4ac939274a92b |
| name                | skyline                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@kolla ~]# openstack role add --project service --user skyline admin
[root@kolla ~]# mkdir -p /etc/skyline /var/log/skyline /var/lib/skyline
[root@kolla ~]# cat > /etc/skyline/skyline.yaml <<EOF
default:
  database_url: 'mysql://skyline:000000@192.168.100.100:3306/skyline'
  prometheus_endpoint: 'http://localhost:9091'
openstack:
  keystone_url: 'http://192.168.100.100:5000/v3'
  default_region: RegionOne
  interface_type: public
  system_user_name: 'skyline'
  system_user_password: '000000'
EOF
[root@kolla ~]# docker load -i /root/99cloud_skyline.tar.gz
[root@kolla ~]# docker run -d --name skyline_bootstrap -e KOLLA_BOOTSTRAP="" \
-v /etc/skyline/skyline.yaml:/etc/skyline/skyline.yaml \
--net=host 99cloud/skyline:latest
[root@kolla ~]# docker logs skyline_bootstrap
+ echo '/usr/local/bin/gunicorn -c /etc/skyline/gunicorn.py skyline_apiserver.main:app'
+ mapfile -t CMD
++ tail /run_command
++ xargs -n 1
+ [[ -n 0 ]]
+ cd /skyline/libs/skyline-apiserver/
+ make db_sync
poetry run alembic upgrade head
Skipping virtualenv creation, as specified in config file.
/usr/local/lib/python3.8/dist-packages/pymysql/cursors.py:170: Warning: (1280, "Name 'alembic_version_pkc' ignored for PRIMARY key.")
  result = self._query(query)
+ exit 0
[root@kolla ~]# docker rm -f skyline_bootstrap
[root@kolla ~]# docker run -d --name skyline \
--restart=always \
-v /etc/skyline/skyline.yaml:/etc/skyline/skyline.yaml \
--net=host \
99cloud/skyline:latest
```
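With the container on the host network, the console should answer locally. Upstream Skyline listens on port 9999 by default (an assumption about the 99cloud image; check the container's nginx config if it differs):

```bash
# Expect a 200 (or a redirect to the login page) once skyline is up.
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:9999/
docker logs --tail 20 skyline
```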
2022-07-19
OpenStack-Pike Deployment: Cinder (Part 7)
Overview

The OpenStack Block Storage service (cinder) adds persistent storage to virtual machines. Block Storage provides the infrastructure for managing volumes and interacts with the Compute service to supply volumes to instances. The service also enables management of volume snapshots and volume types.

The Block Storage service typically consists of the following components:

- cinder-api: accepts API requests and routes them to cinder-volume for action.
- cinder-volume: interacts directly with the Block Storage service and with processes such as cinder-scheduler; it can also talk to them through a message queue. cinder-volume maintains state by responding to read and write requests sent to the Block Storage service, and it can interact with a variety of storage providers through a driver architecture.
- cinder-scheduler daemon: selects the optimal storage node on which to create a volume; similar in role to nova-scheduler.
- cinder-backup daemon: backs up volumes of any type to a backup storage provider; like cinder-volume, it can interact with a variety of storage providers through a driver architecture.
- Message queue: routes information between the Block Storage processes.

Install and configure the controller node

Prerequisites

1. Create the database

Connect to the database server as root:

```bash
mysql -u root -p000000
```

Create the cinder database:

```sql
CREATE DATABASE cinder;
```

Grant the cinder user access to the cinder database:

```sql
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '000000';
```

2. Source the admin credentials

```bash
. admin-openrc
```

3. Create the service credentials

Create a cinder user:

```bash
openstack user create --domain default --password 000000 cinder
```

Add the admin role to the cinder user:

```bash
openstack role add --project service --user cinder admin
```

Create the service entities:

```bash
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the service API endpoints:

```bash
openstack endpoint create --region RegionOne \
  volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne \
  volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne \
  volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne \
  volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne \
  volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne \
  volumev3 admin http://controller:8776/v3/%\(project_id\)s
```
{collapse}
{collapse-item label="CMD"}

```bash
[root@openstack ~]# mysql -u root -p000000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 379
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    -> IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    -> IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye
[root@openstack ~]# . admin-openrc
[root@openstack ~]# openstack user create --domain default --password 000000 cinder
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 15c1c62c21f543d984563abe5c063726 |
| name                | cinder                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@openstack ~]# openstack role add --project service --user cinder admin
[root@openstack ~]# openstack service create --name cinderv2 \
> --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 052ab471e8ed4e5ca687bd73537935b5 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+
[root@openstack ~]# openstack service create --name cinderv3 \
> --description "OpenStack Block Storage" volumev3
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | af52c95327614d1e9fb70286fcb552ea |
| name        | cinderv3                         |
| type        | volumev3                         |
+-------------+----------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne \
> volumev2 public http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 2c3e428aa796442696e7f5919175c1e2         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 052ab471e8ed4e5ca687bd73537935b5         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne \
> volumev2 internal http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | c25695b19d1b4e33a9ea2d4c023b8732         |
| interface    | internal                                 |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 052ab471e8ed4e5ca687bd73537935b5         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne \
> volumev2 admin http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 2928643fb2ac4bd99aa8cc795b55f7e1         |
| interface    | admin                                    |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 052ab471e8ed4e5ca687bd73537935b5         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne \
> volumev3 public http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | a63b291e2ed4450e91ee574e4f9b4a7a         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | af52c95327614d1e9fb70286fcb552ea         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne \
> volumev3 internal http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | f47bd12c58af41298cdc7c5fe76cb8d4         |
| interface    | internal                                 |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | af52c95327614d1e9fb70286fcb552ea         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne \
> volumev3 admin http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 6dd9f8c300754f32abecf2b77e241d5d         |
| interface    | admin                                    |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | af52c95327614d1e9fb70286fcb552ea         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
```

{/collapse-item}
{/collapse}
Install and configure the components

1. Install the packages

```bash
yum install -y openstack-cinder
```

2. Configure cinder.conf

```bash
# sed -i.bak '/^#/d;/^$/d' /etc/cinder/cinder.conf
# vim /etc/cinder/cinder.conf
[database]
# Database access
connection = mysql+pymysql://cinder:000000@controller/cinder

[DEFAULT]
# RabbitMQ message queue access
transport_url = rabbit://openstack:000000@controller
# Identity service access
auth_strategy = keystone
# Management interface IP address of the controller node
my_ip = 178.120.2.100

[keystone_authtoken]
# Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 000000

[oslo_concurrency]
# Lock path
lock_path = /var/lib/cinder/tmp
```

3. Sync the database

```bash
# su -s /bin/sh -c "cinder-manage db sync" cinder
```

Configure Compute to use Block Storage

Configure nova.conf:

```bash
# vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
```

Finalize the installation

1. Restart the Compute API service:

```bash
systemctl restart openstack-nova-api.service
```

2. Start the Block Storage services and enable them at boot:

```bash
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
```
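At this point cinder-api and cinder-scheduler are running but no backend exists yet. A quick check (cinder-volume will only appear once the storage node below is configured):

```bash
. admin-openrc
# cinder-scheduler on the controller should report state 'up'
openstack volume service list
```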
Install and configure the storage node

Prerequisites

1. Install the supporting packages:

```bash
yum install -y lvm2 device-mapper-persistent-data
systemctl enable lvm2-lvmetad.service && systemctl start lvm2-lvmetad.service
```

2. Create the LVM physical volume:

```bash
pvcreate /dev/sdb
```

3. Create the LVM volume group:

```bash
vgcreate cinder-volumes /dev/sdb
```

4. Add a filter:

```bash
# vim /etc/lvm/lvm.conf
filter = [ "a/sdb/", "r/.*/"]
```

The filter must accept whichever device backs the volume group; in the transcript below the data disk is /dev/vdb, so there the entry would be "a/vdb/".

{collapse}
{collapse-item label="CMD"}

```bash
[root@storage_node ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0              11:0    1 1024M  0 rom
vda             252:0    0   30G  0 disk
├─vda1          252:1    0    1G  0 part /boot
└─vda2          252:2    0   29G  0 part
  └─centos-root 253:0    0   29G  0 lvm  /
vdb             252:16   0  100G  0 disk
vdc             252:32   0  100G  0 disk
[root@storage_node ~]# pvcreate /dev/vdb
  Physical volume "/dev/vdb" successfully created.
[root@storage_node ~]# vgcreate cinder-volumes /dev/vdb
  Volume group "cinder-volumes" successfully created
```

{/collapse-item}
{/collapse}

Install and configure the components

1. Install the packages:

```bash
yum install -y openstack-cinder targetcli python-keystone
```

2. Configure cinder.conf

```bash
# sed -i.bak '/^#/d;/^$/d' /etc/cinder/cinder.conf
# vim /etc/cinder/cinder.conf
[database]
# Database access
connection = mysql+pymysql://cinder:000000@controller/cinder

[DEFAULT]
# RabbitMQ message queue access
transport_url = rabbit://openstack:000000@controller
# Identity service access
auth_strategy = keystone
# Management network IP address of the storage node
my_ip = 178.120.2.192
# Enable the LVM backend
enabled_backends = lvm
# Location of the Image service API
glance_api_servers = http://controller:9292

[keystone_authtoken]
# Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 000000

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

[oslo_concurrency]
# Lock path
lock_path = /var/lib/cinder/tmp
```

Finalize the installation

Start the volume service and enable it at boot:

```bash
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
```
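To confirm the whole chain, create a small test volume and watch it land on the storage node's LVM backend. A sketch (the volume name test-vol is arbitrary):

```bash
. admin-openrc
openstack volume create --size 1 test-vol
openstack volume list        # status should move from 'creating' to 'available'

# On the storage node, the new logical volume shows up in the cinder-volumes VG:
lvs cinder-volumes
```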
2022-07-14
OpenStack-Pike Deployment: Dashboard (Part 6)
Dashboard

Install and configure the components

1. Install the packages

```bash
yum install -y openstack-dashboard
```

2. Configure local_settings

```bash
cp -a /etc/openstack-dashboard/local_settings /root/local_settings
vim /etc/openstack-dashboard/local_settings
```

```python
# The dashboard runs on the controller node
OPENSTACK_HOST = "controller"

# Hosts allowed to access the dashboard
ALLOWED_HOSTS = ['*']

# Configure memcached as the session store
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

# Enable Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

# Enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

# Configure the API versions
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

# Default domain for users created through the dashboard: Default
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

# Default role for users created through the dashboard: user
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# If you chose networking option 1, disable support for layer-3 networking services
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
```
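Since sessions go to memcached, it is worth confirming the cache answers on controller:11211 before restarting httpd. A quick probe (nc comes from the nmap-ncat package on CentOS 7):

```bash
# Matches CACHES['default']['LOCATION'] above; a STAT listing means memcached is reachable.
echo stats | nc -w 2 controller 11211 | head -n 5
```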
{collapse}
{collapse-item label="Execution details"}

The configured local_settings:

```python
[root@controller ~]# cat /etc/openstack-dashboard/local_settings
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard.settings import HORIZON_CONFIG

DEBUG = False
WEBROOT = '/dashboard/'
ALLOWED_HOSTS = ['*']
LOCAL_PATH = '/tmp'
SECRET_KEY='04f3ac91f6f48932c88a'
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'native',
    'can_edit_user': True,
    'can_edit_group': True,
    'can_edit_project': True,
    'can_edit_domain': True,
    'can_edit_role': True,
}
OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': False,
    'can_set_password': False,
    'requires_keypair': False,
    'enable_quotas': True
}
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': False,
}
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
    'supported_vnic_types': ['*'],
    'physical_networks': [],
}
OPENSTACK_HEAT_STACK = {
    'enable_user_pass': True,
}
IMAGE_CUSTOM_PROPERTY_TITLES = {
    "architecture": _("Architecture"),
    "kernel_id": _("Kernel ID"),
    "ramdisk_id": _("Ramdisk ID"),
    "image_state": _("Euca2ools state"),
    "project_id": _("Project ID"),
    "image_type": _("Image Type"),
}
IMAGE_RESERVED_CUSTOM_PROPERTIES = []
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
INSTANCE_LOG_LENGTH = 35
DROPDOWN_MAX_ITEMS = 30
TIME_ZONE = "UTC"
POLICY_FILES_PATH = '/etc/openstack-dashboard'
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'console': {'format': '%(levelname)s %(name)s %(message)s'},
        'operation': {'format': '%(message)s'},
    },
    'handlers': {
        'null': {'level': 'DEBUG', 'class': 'logging.NullHandler'},
        'console': {'level': 'INFO', 'class': 'logging.StreamHandler', 'formatter': 'console'},
        'operation': {'level': 'INFO', 'class': 'logging.StreamHandler', 'formatter': 'operation'},
    },
    'loggers': {
        'django.db.backends': {'handlers': ['null'], 'propagate': False},
        'requests': {'handlers': ['null'], 'propagate': False},
        'horizon': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'horizon.operation_log': {'handlers': ['operation'], 'level': 'INFO', 'propagate': False},
        'openstack_dashboard': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'novaclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'cinderclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'keystoneclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'glanceclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'neutronclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'heatclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'swiftclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'openstack_auth': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'nose.plugins.manager': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'django': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'iso8601': {'handlers': ['null'], 'propagate': False},
        'scss': {'handlers': ['null'], 'propagate': False},
    },
}
SECURITY_GROUP_RULES = {
    'all_tcp': {'name': _('All TCP'), 'ip_protocol': 'tcp', 'from_port': '1', 'to_port': '65535'},
    'all_udp': {'name': _('All UDP'), 'ip_protocol': 'udp', 'from_port': '1', 'to_port': '65535'},
    'all_icmp': {'name': _('All ICMP'), 'ip_protocol': 'icmp', 'from_port': '-1', 'to_port': '-1'},
    'ssh': {'name': 'SSH', 'ip_protocol': 'tcp', 'from_port': '22', 'to_port': '22'},
    'smtp': {'name': 'SMTP', 'ip_protocol': 'tcp', 'from_port': '25', 'to_port': '25'},
    'dns': {'name': 'DNS', 'ip_protocol': 'tcp', 'from_port': '53', 'to_port': '53'},
    'http': {'name': 'HTTP', 'ip_protocol': 'tcp', 'from_port': '80', 'to_port': '80'},
    'pop3': {'name': 'POP3', 'ip_protocol': 'tcp', 'from_port': '110', 'to_port': '110'},
    'imap': {'name': 'IMAP', 'ip_protocol': 'tcp', 'from_port': '143', 'to_port': '143'},
    'ldap': {'name': 'LDAP', 'ip_protocol': 'tcp', 'from_port': '389', 'to_port': '389'},
    'https': {'name': 'HTTPS', 'ip_protocol': 'tcp', 'from_port': '443', 'to_port': '443'},
    'smtps': {'name': 'SMTPS', 'ip_protocol': 'tcp', 'from_port': '465', 'to_port': '465'},
    'imaps': {'name': 'IMAPS', 'ip_protocol': 'tcp', 'from_port': '993', 'to_port': '993'},
    'pop3s': {'name': 'POP3S', 'ip_protocol': 'tcp', 'from_port': '995', 'to_port': '995'},
    'ms_sql': {'name': 'MS SQL', 'ip_protocol': 'tcp', 'from_port': '1433', 'to_port': '1433'},
    'mysql': {'name': 'MYSQL', 'ip_protocol': 'tcp', 'from_port': '3306', 'to_port': '3306'},
    'rdp': {'name': 'RDP', 'ip_protocol': 'tcp', 'from_port': '3389', 'to_port': '3389'},
}
REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES',
                              'LAUNCH_INSTANCE_DEFAULTS',
                              'OPENSTACK_IMAGE_FORMATS',
                              'OPENSTACK_KEYSTONE_DEFAULT_DOMAIN',
                              'CREATE_IMAGE_DEFAULTS']
ALLOWED_PRIVATE_SUBNET_CIDR = {'ipv4': [], 'ipv6': []}
```

{/collapse-item}
{/collapse}

Finalize the installation

Restart the web server and the session store:

```bash
systemctl restart httpd.service memcached.service
```
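After the restart, the dashboard should answer under the WEBROOT configured above. A minimal check:

```bash
# 200, or a 302 redirect to the login page, means Horizon is serving.
curl -s -o /dev/null -w "%{http_code}\n" http://controller/dashboard/
```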
2022-07-14
OpenStack-Pike Deployment: Neutron (Part 5)
Neutron

Install and configure the controller node

Prerequisites

1. Create the database and grant access

Log in to the database as root:

```bash
mysql -u root -p000000
```

Create the neutron database:

```sql
CREATE DATABASE neutron;
```

Give the neutron user full access to the neutron database:

```sql
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '000000';
```

2. Source the admin credentials

```bash
. admin-openrc
```

3. Create the service credentials

Create the neutron user:

```bash
openstack user create --domain default --password 000000 neutron
```

Give the neutron user the admin role in the service project:

```bash
openstack role add --project service --user neutron admin
```

Create the neutron service entity:

```bash
openstack service create --name neutron --description "OpenStack Networking" network
```

4. Create the Networking service API endpoints

```bash
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
```

{collapse}
{collapse-item label="Execution details"}

```bash
[root@controller ~]# mysql -u root -p000000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 68
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    -> IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    -> IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | bd11a70055634b8996bdd7096ea91a60 |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@controller ~]# openstack role add --project service --user neutron admin
[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | 3f33133eae714fa492723f3617e8705f |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4df23df9efe547ea88b5ec0e01201c4a |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 3f33133eae714fa492723f3617e8705f |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 0412574e5e3f4b3ca5e7e18f753d7e80 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 3f33133eae714fa492723f3617e8705f |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 2251b821a7484bb0a5eb65697af351f6 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 3f33133eae714fa492723f3617e8705f |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
```

{/collapse-item}
{/collapse}

Configure the networking option (flat networks)

[Configuration reference]: https://docs.openstack.org/neutron/latest/configuration/config.html

Install the components:

```bash
yum install -y openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
```

Configure the server component

Configure neutron.conf:

```bash
# sed -i.bak '/^#/d;/^$/d' /etc/neutron/neutron.conf
# vim /etc/neutron/neutron.conf
[database]
# Database access
connection = mysql+pymysql://neutron:000000@controller/neutron

[DEFAULT]
# Enable the ML2 plug-in and disable other plug-ins
core_plugin = ml2
service_plugins =
# RabbitMQ message queue access
transport_url = rabbit://openstack:000000@controller
# Identity service access
auth_strategy = keystone
# Notify Compute of network topology changes
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
# Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000

[nova]
# Notify Compute of network topology changes
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 000000

[oslo_concurrency]
# Lock path
lock_path = /var/lib/neutron/tmp
```

Configure the ML2 plug-in

Configure ml2_conf.ini:

```bash
# sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/ml2_conf.ini
# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
# Enable flat and VLAN networks
type_drivers = flat,vlan
# Disable self-service networks
tenant_network_types =
# Enable the Linux bridge mechanism
mechanism_drivers = linuxbridge
# Enable the port security extension driver
extension_drivers = port_security

[securitygroup]
# Enable ipset for more efficient security group rules
enable_ipset = true
```

Configure the Linux bridge agent

Configure linuxbridge_agent.ini:

```bash
# sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
# Map the flat network to a physical network interface
physical_interface_mappings = provider:eth0

[vxlan]
# Disable VXLAN overlay networks
enable_vxlan = false

[securitygroup]
# Enable security groups and the Linux bridge iptables firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Configure the DHCP agent

Configure dhcp_agent.ini:

```bash
# sed -i.bak '/^#/d;/^$/d' /etc/neutron/dhcp_agent.ini
# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
# Linux bridge interface driver, Dnsmasq DHCP driver, isolated metadata
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```
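After stripping the comments it is easy to drop an option into the wrong section, so a quick read-back of the critical keys helps before restarting anything. This sketch uses crudini, which ships in the RDO repos (an assumption; plain grep works just as well):

```bash
crudini --get /etc/neutron/neutron.conf DEFAULT core_plugin            # ml2
crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers   # flat,vlan
crudini --get /etc/neutron/plugins/ml2/linuxbridge_agent.ini \
    linux_bridge physical_interface_mappings                           # provider:eth0
```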
{collapse}
{collapse-item label="Execution details"}

```bash
[root@controller ~]# yum install -y openstack-neutron openstack-neutron-ml2 \
> openstack-neutron-linuxbridge ebtables
[root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/neutron.conf
[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
# Enable the ML2 plug-in and disable other plug-ins
core_plugin = ml2
service_plugins =
# RabbitMQ message queue access
transport_url = rabbit://openstack:000000@controller
# Identity service access
auth_strategy = keystone
# Notify Compute of network topology changes
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[agent]
[cors]
[database]
# Database access
connection = mysql+pymysql://neutron:000000@controller/neutron
[keystone_authtoken]
# Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000
[matchmaker_redis]
[nova]
# Notify Compute of network topology changes
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 000000
[oslo_concurrency]
# Lock path
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]
[root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/ml2_conf.ini
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[root@controller ~]# cat /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]
[l2pop]
[ml2]
# Enable flat and VLAN networks
type_drivers = flat,vlan
# Disable self-service networks
tenant_network_types =
# Enable the Linux bridge mechanism
mechanism_drivers = linuxbridge
# Enable the port security extension driver
extension_drivers = port_security
[ml2_type_flat]
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
[securitygroup]
# Enable ipset for more efficient security group rules
enable_ipset = true
[root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@controller ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
# Map the flat network to a physical network interface
physical_interface_mappings = provider:eth0
[securitygroup]
# Enable security groups and the Linux bridge iptables firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
# Disable VXLAN overlay networks
enable_vxlan = false
[root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/dhcp_agent.ini
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
[root@controller ~]# cat /etc/neutron/dhcp_agent.ini
[DEFAULT]
# Linux bridge interface driver, Dnsmasq DHCP driver, isolated metadata
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[agent]
[ovs]
```

{/collapse-item}
{/collapse}

Configure the metadata agent

Configure metadata_agent.ini:

```bash
sed -i.bak '/^#/d;/^$/d' /etc/neutron/metadata_agent.ini
vim /etc/neutron/metadata_agent.ini
[DEFAULT]
# Metadata host and shared secret
nova_metadata_host = controller
metadata_proxy_shared_secret = 000000
```

{collapse}
{collapse-item label="Execution details"}

```bash
[root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/metadata_agent.ini
[root@controller ~]# vim /etc/neutron/metadata_agent.ini
[root@controller ~]# cat /etc/neutron/metadata_agent.ini
[DEFAULT]
# Metadata host and shared secret
nova_metadata_host = controller
metadata_proxy_shared_secret = 000000
[agent]
[cache]
```

{/collapse-item}
{/collapse}

Configure Compute to use Networking

Configure nova.conf:

```bash
vim /etc/nova/nova.conf
[neutron]
# Access parameters, metadata proxy and shared secret
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
service_metadata_proxy = true
metadata_proxy_shared_secret = 000000
```

{collapse}
{collapse-item label="Execution details"}

```bash
[root@controller ~]# vim /etc/nova/nova.conf
```

{/collapse-item}
{/collapse}

Finalize the installation

1. Create the plugin.ini symlink:

```bash
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

2. Sync the neutron database:

```bash
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

3. Restart the nova-api service:

```bash
systemctl restart openstack-nova-api.service
```

4. Start the Networking services and enable them at boot:

```bash
systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
```

5. Adjust the kernel networking parameters:

```bash
[root@controller ~]# vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
```

{collapse}
{collapse-item label="Execution details"}

```bash
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
> --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  INFO  [alembic.runtime.migration] Context impl MySQLImpl.
  INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  Running upgrade for neutron ...
  INFO  [alembic.runtime.migration] Context impl MySQLImpl.
  INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  INFO  [alembic.runtime.migration] Running upgrade  -> kilo, kilo_initial
  INFO  [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225, nsxv_vdr_metadata.py
  INFO  [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151, neutrodb_ipam
  INFO  [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf, Initial operations in support of address scopes
  INFO  [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee, Flavor framework
  INFO  [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f, network_rbac
  INFO  [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773, quota_usage
  INFO  [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592, subnetpool hash
  INFO  [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7, add order to dnsnameservers
  INFO  [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79, address scope support in subnetpool
  INFO  [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051, qos db changes
  INFO  [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136, quota_reservations
  INFO  [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59, Add dns_name to Port
  INFO  [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d, Add availability zone
  INFO  [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a, add is_default to subnetpool
  INFO  [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25, Add standard attribute table
  INFO  [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee, Add network availability zone
  INFO  [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9, Add router availability zone
  INFO  [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4, Add ip_version to AddressScope
  INFO  [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664, Add tables and attributes to support external DNS integration
  INFO  [alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5, add_unique_ha_router_agent_port_bindings
  INFO  [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f, Auto Allocated Topology - aka Get-Me-A-Network
  INFO  [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821, add dynamic routing model data
  INFO  [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4, add_bgp_dragent_model_data
  INFO  [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81, rbac_qos_policy
  INFO  [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6, Add resource_versions row to agent table
  INFO  [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532, tag support
  INFO  [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f, add_timestamp_to_base_resources
  INFO  [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a, Add desc to standard attr table
  INFO  [alembic.runtime.migration] Running upgrade 0e66c5227a8a -> 45f8dd33480b, qos dscp db addition
  INFO  [alembic.runtime.migration] Running upgrade 45f8dd33480b -> 5abc0278ca73, Add support for VLAN trunking
  INFO  [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99, Initial no-op Liberty contract rule.
  INFO  [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada, network_rbac
  INFO  [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016, Drop legacy OVS and LB plugin tables
  INFO  [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3, Metaplugin removal
  INFO  [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d, Add missing foreign keys
  INFO  [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d, add geneve ml2 type driver
  INFO  [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297, Drop cisco monolithic tables
  INFO  [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c, Drop embrane plugin table
  INFO  [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39, standardattributes migration
  INFO  [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b, DVR sheduling refactoring
  INFO  [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050, Drop NEC plugin tables
  INFO  [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9, rbac_qos_policy
  INFO  [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada, network_rbac_external
  INFO  [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc, standard_desc
  INFO  [alembic.runtime.migration] Running upgrade 4ffceebfcdc -> 7bbb25278f53, device_owner_ha_replicate_int
  INFO  [alembic.runtime.migration] Running upgrade 7bbb25278f53 -> 89ab9a816d70, Rename ml2_network_segments table
  INFO  [alembic.runtime.migration] Running upgrade 5abc0278ca73 -> d3435b514502, Add device_id index to Port
  INFO  [alembic.runtime.migration] Running upgrade d3435b514502 -> 30107ab6a3ee, provisioning_blocks.py
  INFO  [alembic.runtime.migration] Running upgrade 30107ab6a3ee -> c415aab1c048, add revisions table
  INFO  [alembic.runtime.migration] Running upgrade c415aab1c048 -> a963b38d82f4, add dns name to portdnses
  INFO  [alembic.runtime.migration] Running upgrade 89ab9a816d70 -> c879c5e1ee90, Add segment_id to subnet
  INFO  [alembic.runtime.migration] Running upgrade c879c5e1ee90 -> 8fd3918ef6f4, Add segment_host_mapping table.
  INFO  [alembic.runtime.migration] Running upgrade 8fd3918ef6f4 -> 4bcd4df1f426, Rename ml2_dvr_port_bindings
  INFO  [alembic.runtime.migration] Running upgrade 4bcd4df1f426 -> b67e765a3524, Remove mtu column from networks.
  INFO  [alembic.runtime.migration] Running upgrade a963b38d82f4 -> 3d0e74aa7d37, Add flavor_id to Router
  INFO  [alembic.runtime.migration] Running upgrade 3d0e74aa7d37 -> 030a959ceafa, uniq_routerports0port_id
  INFO  [alembic.runtime.migration] Running upgrade 030a959ceafa -> a5648cfeeadf, Add support for Subnet Service Types
  INFO  [alembic.runtime.migration] Running upgrade a5648cfeeadf -> 0f5bef0f87d4, add_qos_minimum_bandwidth_rules
  INFO  [alembic.runtime.migration] Running upgrade 0f5bef0f87d4 -> 67daae611b6e, add standardattr to qos policies
  INFO  [alembic.runtime.migration] Running upgrade 67daae611b6e -> 6b461a21bcfc, uniq_floatingips0floating_network_id0fixed_port_id0fixed_ip_addr
  INFO  [alembic.runtime.migration] Running upgrade 6b461a21bcfc -> 5cd92597d11d, Add ip_allocation to port
  INFO  [alembic.runtime.migration] Running upgrade 5cd92597d11d -> 929c968efe70, add_pk_version_table
  INFO  [alembic.runtime.migration] Running upgrade 929c968efe70 -> a9c43481023c, extend_pk_with_host_and_add_status_to_ml2_port_binding
  INFO  [alembic.runtime.migration] Running upgrade a9c43481023c -> 804a3c76314c, Add data_plane_status to Port
  INFO  [alembic.runtime.migration] Running upgrade 804a3c76314c -> 2b42d90729da, qos add direction to bw_limit_rule table
  INFO  [alembic.runtime.migration] Running upgrade 2b42d90729da -> 62c781cb6192, add is default to qos policies
  INFO  [alembic.runtime.migration] Running upgrade 62c781cb6192 -> c8c222d42aa9, logging api
  INFO  [alembic.runtime.migration] Running upgrade c8c222d42aa9 -> 349b6fd605a6, Add dns_domain to portdnses
  INFO  [alembic.runtime.migration] Running upgrade 349b6fd605a6 -> 7d32f979895f, add mtu for networks
  INFO  [alembic.runtime.migration] Running upgrade b67e765a3524 -> a84ccf28f06a, migrate dns name from port
  INFO  [alembic.runtime.migration] Running upgrade a84ccf28f06a -> 7d9d8eeec6ad, rename tenant to project
  INFO  [alembic.runtime.migration] Running upgrade 7d9d8eeec6ad -> a8b517cff8ab, Add routerport bindings for L3 HA
  INFO  [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0, migrate to pluggable ipam
  INFO  [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62, add standardattr to qos policies
  INFO  [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353, Add Name and Description to the networksegments table
  INFO  [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586, Add binding index to RouterL3AgentBinding
  INFO  [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d, Remove availability ranges.
  OK
[root@controller ~]# systemctl restart openstack-nova-api.service
[root@controller ~]# systemctl enable neutron-server.service \
> neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
> neutron-metadata-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.
[root@controller ~]# systemctl start neutron-server.service \
> neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
> neutron-metadata-agent.service
```

{/collapse-item}
{/collapse}
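With the controller services up, the agents should register within a few seconds. A quick verification sketch:

```bash
. admin-openrc
# Expect the Linux bridge, DHCP and metadata agents with an alive ':-)' state.
openstack network agent list
```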
Install and configure the compute node

Install the components:

```bash
yum install -y openstack-neutron-linuxbridge ebtables ipset
```

Configure the common component

Configure neutron.conf:

```bash
# sed -i.bak '/^#/d;/^$/d' /etc/neutron/neutron.conf
# vim /etc/neutron/neutron.conf
[DEFAULT]
# RabbitMQ message queue access
transport_url = rabbit://openstack:000000@controller
# Identity service access
auth_strategy = keystone

[keystone_authtoken]
# Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000

[oslo_concurrency]
# Lock path
lock_path = /var/lib/neutron/tmp
```

{collapse}
{collapse-item label="Execution details"}

```bash
[root@compute ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/neutron.conf
[root@compute ~]# vim /etc/neutron/neutron.conf
[root@compute ~]# cat /etc/neutron/neutron.conf
[DEFAULT]
# RabbitMQ message queue access
transport_url = rabbit://openstack:000000@controller
# Identity service access
auth_strategy = keystone
[agent]
[cors]
[database]
[keystone_authtoken]
# Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000
[matchmaker_redis]
[nova]
[oslo_concurrency]
# Lock path
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]
```

{/collapse-item}
{/collapse}

Configure the networking option (flat networks)

Configure linuxbridge_agent.ini:

```bash
# sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
# Map the flat network to a physical network interface
physical_interface_mappings = provider:eth0

[vxlan]
# Disable VXLAN overlay networks
enable_vxlan = false

[securitygroup]
# Enable security groups and the Linux bridge iptables firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

{collapse}
{collapse-item label="Execution details"}

```bash
[root@compute ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@compute ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
# Map the flat network to a physical network interface
physical_interface_mappings = provider:eth0
[securitygroup]
# Enable security groups and the Linux bridge iptables firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
# Disable VXLAN overlay networks
enable_vxlan = false
```

{/collapse-item}
{/collapse}

Configure Compute to use Networking

Configure nova.conf:

```bash
# vim /etc/nova/nova.conf
[neutron]
# Access parameters
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
```

{collapse}
{collapse-item label="Execution details"}

```bash
[root@compute ~]# vim /etc/nova/nova.conf
[neutron]
# Access parameters
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
```

{/collapse-item}
{/collapse}

Finalize the installation

1. Restart the Compute service:

```bash
systemctl restart openstack-nova-compute.service
```

2. Start the Linux bridge agent and enable it at boot:

```bash
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
```

3. Adjust the kernel networking parameters:

```bash
[root@compute ~]# vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
[root@compute ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
```

{collapse}
{collapse-item label="Execution details"}

```bash
[root@compute ~]# systemctl restart openstack-nova-compute.service
[root@compute ~]# systemctl enable neutron-linuxbridge-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@compute ~]# systemctl start neutron-linuxbridge-agent.service
[root@compute ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
```

{/collapse-item}
{/collapse}
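With both nodes done, an end-to-end test is to create the flat provider network the agents were configured for. A sketch (the subnet range and gateway are illustrative and must match the real eth0 network; the physical network label 'provider' must match physical_interface_mappings):

```bash
. admin-openrc
openstack network create --share --external \
    --provider-physical-network provider \
    --provider-network-type flat provider

openstack subnet create --network provider \
    --allocation-pool start=178.120.2.200,end=178.120.2.220 \
    --gateway 178.120.2.1 --subnet-range 178.120.2.0/24 \
    provider-subnet
```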