1+X Cloud Computing, Junior Level (Practical Tasks)


Acha
2021-12-04

1+X Junior Level

by 王政(blog http://blog.youto.club)

# Note: the companion video only walks through this document; it does not explain each step in detail.
# Video: https://www.bilibili.com/video/BV1hL411M72w

Environment preparation

yum repository
selinux & firewalld
hostname
hosts
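The four prep items above boil down to a handful of commands per node. A minimal sketch, with the IPs and hostnames the tasks below assume; `HOSTS_FILE` stands in for `/etc/hosts` so the snippet can run anywhere:

```shell
# Environment prep sketch. HOSTS_FILE stands in for /etc/hosts;
# on the exam machines write to /etc/hosts itself.
HOSTS_FILE="${HOSTS_FILE:-./hosts}"
cat >> "$HOSTS_FILE" <<'EOF'
192.168.100.11 xserver1
192.168.100.12 xserver2
EOF
# Per node (run as root on the real machines):
#   hostnamectl set-hostname xserver1     # xserver2 on the second node
#   setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
#   systemctl stop firewalld && systemctl disable firewalld
grep -c 'xserver' "$HOSTS_FILE"
```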

Network services

Local YUM repository management

Start the provided xserver1 virtual machine in VMware (set the VM's IP to 192.168.100.11 and its hostname to xserver1). The /root directory of the VM contains the image file CentOS-7-x86_64-DVD-1511.iso. Use this image to configure a local yum repository: mount the image at /opt/centos, then write a local.repo file so that packages from the image can be installed. Submit the contents of local.repo as text in the answer box.
[root@xserver1 ~]# mkdir /opt/centos
[root@xserver1 ~]# mount -o loop CentOS-7-x86_64-DVD-1511.iso /opt/centos/
mount: /dev/loop0 is write-protected, mounting read-only
[root@xserver1 ~]# rm -f /etc/yum.repos.d/*
[root@xserver1 ~]# cat /etc/yum.repos.d/local.repo
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
# Answer box
[root@xserver1 ~]# cat /etc/yum.repos.d/local.repo 
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
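The same file can be generated non-interactively with a heredoc. A sketch; `REPO_DIR` stands in for `/etc/yum.repos.d` so it is runnable anywhere:

```shell
# Write local.repo with a heredoc (REPO_DIR is /etc/yum.repos.d on the exam VM).
REPO_DIR="${REPO_DIR:-.}"
cat > "$REPO_DIR/local.repo" <<'EOF'
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
EOF
# Sanity check: the repo id and baseurl are what yum will see.
grep -E '^\[|^baseurl' "$REPO_DIR/local.repo"
```

On the real machine, `yum repolist` afterwards confirms the repo resolves.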

FTP installation and use

Using the xserver1 VM, install an FTP service and set its shared directory to /opt. Then start the provided xserver2 VM in VMware (set its IP to 192.168.100.12 and its hostname to xserver2) and create a yum repository file ftp.repo on it that uses xserver1's FTP repository (use the hostname, not the IP, in the FTP address). When done, submit xserver2's ftp.repo file as text in the answer box.
yum install -y vsftpd 
echo "anon_root=/opt/" >> /etc/vsftpd/vsftpd.conf
systemctl start vsftpd && systemctl enable vsftpd

[root@xserver2 ~]# cat /etc/yum.repos.d/ftp.repo
[centos]
name=centos
baseurl=ftp://xserver1/centos
gpgcheck=0
enabled=1
# Answer box
[root@xserver2 ~]# cat /etc/yum.repos.d/ftp.repo
[centos]
name=centos
baseurl=ftp://xserver1/centos
gpgcheck=0
enabled=1

Samba management

Using the xserver1 VM, install the packages required for the Samba service and share the /opt/share directory on xserver1 via Samba (create the directory if it does not exist). When done, submit the [share] section of xserver1's Samba configuration file and the output of netstat -ntpl as text in the answer box.
yum install -y samba

vi /etc/samba/smb.conf
# add:
[share]
comment = Share Directories
path=/opt/share
writable = yes

systemctl restart smb
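The [share] stanza can likewise be appended with a heredoc instead of editing smb.conf by hand. A sketch; `SMB_CONF` stands in for `/etc/samba/smb.conf`:

```shell
# Append the share definition (SMB_CONF is /etc/samba/smb.conf on the exam VM).
SMB_CONF="${SMB_CONF:-./smb.conf}"
cat >> "$SMB_CONF" <<'EOF'
[share]
comment = Share Directories
path=/opt/share
writable = yes
EOF
# testparm -s "$SMB_CONF"   # on a host with samba installed, validates the syntax
grep -A3 '^\[share\]' "$SMB_CONF"
```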
# Answer box
[share]
comment = Share Directories
path=/opt/share
writable = yes

[root@xserver1 ~]# netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:139             0.0.0.0:*               LISTEN      3527/smbd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1429/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1944/master         
tcp        0      0 0.0.0.0:445             0.0.0.0:*               LISTEN      3527/smbd           
tcp6       0      0 :::139                  :::*                    LISTEN      3527/smbd           
tcp6       0      0 :::21                   :::*                    LISTEN      3280/vsftpd         
tcp6       0      0 :::22                   :::*                    LISTEN      1429/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1944/master         
tcp6       0      0 :::445                  :::*                    LISTEN      3527/smbd 

NFS service management

Using the xserver1 and xserver2 VMs, install the packages required for the NFS service and share the /mnt/share directory on xserver1 via NFS (create the directory if it does not exist; access to the share must be limited to the 192.168.100.0/24 network). Then, on xserver2, mount xserver1's share at /mnt. When done, write the (abbreviated) output of showmount -e ip on xserver1 and of df -h on xserver2 below, in that order.
# xserver1
yum search nfs
yum install -y nfs-utils
mkdir /mnt/share
man exports
vi /etc/exports
 # add:
 /mnt/share 192.168.100.0/24(rw,sync)
systemctl start nfs-server rpcbind
exportfs -r
showmount -e 192.168.100.11

# xserver2
yum install -y nfs-utils
mount 192.168.100.11:/mnt/share /mnt/
df -h
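The exports entry follows the pattern `<directory> <client>(<options>)`. A small sketch that writes the entry and picks the fields back apart; `EXPORTS_FILE` stands in for `/etc/exports` on xserver1:

```shell
# Write and parse the exports entry (EXPORTS_FILE is /etc/exports on xserver1).
EXPORTS_FILE="${EXPORTS_FILE:-./exports}"
echo '/mnt/share 192.168.100.0/24(rw,sync)' > "$EXPORTS_FILE"
entry=$(cat "$EXPORTS_FILE")
dir=${entry%% *}                      # exported directory
net=${entry#* }; net=${net%%(*}       # allowed client network
opts=${entry##*(}; opts=${opts%)}     # rw = writable, sync = write-through
echo "dir=$dir net=$net opts=$opts"
```

After editing the file on the real host, `exportfs -r` re-reads it, as in the steps above.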
# Answer box
[root@xserver1 ~]# showmount -e 192.168.100.11
Export list for 192.168.100.11:
/mnt/share 192.168.100.0/24

[root@xserver2 ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/centos-root     36G  951M   35G   3% /
devtmpfs                   1.9G     0  1.9G   0% /dev
tmpfs                      1.9G     0  1.9G   0% /dev/shm
tmpfs                      1.9G  8.7M  1.9G   1% /run
tmpfs                      1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1                  497M  114M  384M  23% /boot
tmpfs                      378M     0  378M   0% /run/user/0
192.168.100.11:/mnt/share   36G  7.7G   28G  22% /mnt

Master-slave database management

Install the mariadb database on xserver1 and xserver2 and configure them as a master-slave pair (xserver1 as master, xserver2 as slave) so that the two databases replicate. When done, run "show slave status \G" in the database on xserver2 to query the slave's replication state, and submit the result as text in the answer box.
## xserver1
 yum install -y mariadb-server
 systemctl start mariadb
 mysql_secure_installation 

 mysql -uroot -p000000
 MariaDB [(none)]> GRANT ALL  privileges ON *.* TO 'root'@'%' IDENTIFIED BY '000000';
 MariaDB [(none)]> GRANT replication slave ON *.* TO 'user'@'xserver2' IDENTIFIED BY '000000'; 

 vi /etc/my.cnf
 # add at line 2:
   log_bin=mysql-bin
   server_id=11
 systemctl restart mariadb


## xserver2
 yum install -y mariadb-server
 systemctl start mariadb
 mysql_secure_installation 
 vi /etc/my.cnf
 # add at line 2:
   log_bin=mysql-bin
   server_id=12

 systemctl restart mariadb

 mysql -uroot -p000000
 MariaDB [(none)]> CHANGE MASTER TO  MASTER_HOST='xserver1',MASTER_USER='user',MASTER_PASSWORD='000000';
 MariaDB [(none)]> start slave;
 MariaDB [(none)]> show slave status \G
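The two my.cnf fragments above differ only in server_id (11 on the master, 12 on the slave), so they can be generated from a single function. A sketch that writes to local files; on the real nodes the lines go under [mysqld] in /etc/my.cnf:

```shell
# Generate the per-node replication settings; only server_id differs.
gen_repl_cnf() {
  cat <<EOF
log_bin=mysql-bin
server_id=$1
EOF
}
gen_repl_cnf 11 > master.cnf   # xserver1
gen_repl_cnf 12 > slave.cnf    # xserver2
grep server_id master.cnf slave.cnf
```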
# Answer box
MariaDB [(none)]> show slave status \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: xserver1
                  Master_User: user
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 245
               Relay_Log_File: mariadb-relay-bin.000003
                Relay_Log_Pos: 529
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 245
              Relay_Log_Space: 825
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 11
1 row in set (0.00 sec)

Deploying WordPress

On the xserver1 node, deploy WordPress on an LNMP stack (the WordPress source archive is in /root). After deployment, set the WordPress site title to your own name (e.g. if your name is Zhang San, set the title to Zhang San's BLOG), then open the WordPress home page. Finally, submit the output of curl ip (where ip is the IP of the WordPress home page) as text in the answer box.
# xserver1
  vi /etc/yum.repos.d/local.repo
  # add:
     [lnmp]
     name=lnmp
     baseurl=file:///root/lnmp
     gpgcheck=0
     enabled=1

  yum repolist

  yum install -y nginx php php-fpm php-mysql

  vi /etc/php-fpm.d/www.conf 
  # modify:
    39  user = nginx
    41  group = nginx

  vi /etc/nginx/conf.d/default.conf 
  # modify:
    10          index index.php index.html index.htm;
    30      location ~ \.php$ {
    31          root           /usr/share/nginx/html;
    32          fastcgi_pass   127.0.0.1:9000;
    33          fastcgi_index  index.php;
    34          fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    35          include        fastcgi_params;
    36      }
  systemctl start nginx php-fpm

  [root@xserver1 ~]# mysql -uroot -p000000
  MariaDB [(none)]> create database wordpress;

  rm  -f /usr/share/nginx/html/*
  yum install -y unzip
  unzip wordpress-4.7.3-zh_CN.zip 
  cp -r wordpress/* /usr/share/nginx/html/
  ls /usr/share/nginx/html/
  chown -R nginx.nginx /usr/share/nginx/html/
  cp /usr/share/nginx/html/wp-config-sample.php /usr/share/nginx/html/wp-config.php
  vi /usr/share/nginx/html/wp-config.php
  # modify:
    23  define('DB_NAME', 'wordpress');
    26  define('DB_USER', 'root');
    29  define('DB_PASSWORD', '000000');

  # open the IP in a browser and set the site title

The numbers at the start of the lines above are line numbers in the file being edited.

tip: in vi, :set nu displays line numbers
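The three wp-config.php edits above can also be applied with sed instead of editing by line number. The sketch below runs against a minimal stand-in for the sample file (the placeholder values are the ones wp-config-sample.php ships with); on the exam VM, run the same sed against /usr/share/nginx/html/wp-config.php:

```shell
# Stand-in for wp-config-sample.php using WordPress's stock placeholders.
cat > wp-config.php <<'EOF'
define('DB_NAME', 'database_name_here');
define('DB_USER', 'username_here');
define('DB_PASSWORD', 'password_here');
EOF
# Replace the placeholders with the values used in the steps above.
sed -i \
  -e "s/'database_name_here'/'wordpress'/" \
  -e "s/'username_here'/'root'/" \
  -e "s/'password_here'/'000000'/" \
  wp-config.php
grep DB_ wp-config.php
```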

Extras

LVM

# Solution steps:
 1. Add the disk and create the partitions
 2. Create the PVs
 3. Create the VG
 4. Create the LV
 5. Format and mount
Using the xserver1 VM, add a 20G disk in VMware and use fdisk to partition it into three 5G partitions. Use the three partitions to create a volume group named xcloudvg, then create a logical volume named xcloudlv of size 12G. After creating it, shrink the logical volume by 2G with the appropriate command, then view the logical volume information. Submit all of the above commands and their output as text in the answer box.
[root@xserver1 ~]# lsblk 
[root@xserver1 ~]# fdisk /dev/sdb
# n, accept the defaults, set the size with +5G; repeat three times
[root@xserver1 ~]# pvcreate /dev/sdb[1-3]
[root@xserver1 ~]# vgcreate xcloudvg /dev/sdb[1-3]
[root@xserver1 ~]# lvcreate -L 12G -n xcloudlv xcloudvg
[root@xserver1 ~]# lvreduce -L -2G /dev/mapper/xcloudvg-xcloudlv
[root@xserver1 ~]# lvdisplay
Create a VM in VMware from the provided CentOS-7-x86_64-DVD-1511.iso, configure its network, and add an extra 20G disk. Use fdisk to partition the disk into three 5G partitions. Use the three partitions to create a volume group named xcloudvg, then create a logical volume named xcloudlv of size 12G. Finally, format the logical volume with the xfs filesystem and mount it at /mnt. Submit all of the above commands and their output as text in the answer box.
[root@xserver2 ~]# lsblk 
[root@xserver2 ~]# fdisk /dev/sdb
# n, accept the defaults, set the size with +5G; repeat three times
[root@xserver2 ~]# pvcreate /dev/sdb1 /dev/sdb2 /dev/sdb3
[root@xserver2 ~]# vgcreate xcloudvg /dev/sdb[1-3]
[root@xserver2 ~]# lvcreate -L 12G -n xcloudlv xcloudvg
[root@xserver2 ~]# mkfs.xfs /dev/xcloudvg/xcloudlv
[root@xserver2 ~]# mount /dev/xcloudvg/xcloudlv /mnt
[root@xserver2 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/centos-root         36G  872M   35G   3% /
devtmpfs                       1.9G     0  1.9G   0% /dev
tmpfs                          1.9G     0  1.9G   0% /dev/shm
tmpfs                          1.9G  8.6M  1.9G   1% /run
tmpfs                          1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1                      497M  114M  384M  23% /boot
tmpfs                          378M     0  378M   0% /run/user/0
/dev/mapper/xcloudvg-xcloudlv   12G   33M   12G   1% /mnt
Using the xserver1 VM, add a 20G disk in VMware and use fdisk to partition it into two 5G partitions. Use the two partitions to create a volume group named xcloudvg with a PE size of 16 MB. Submit the output of vgdisplay as text in the answer box.
[root@xserver1 ~]# fdisk /dev/sdb
# n, accept the defaults, set the size with +5G; repeat twice (two partitions)
[root@xserver1 ~]# pvcreate /dev/sdb1 /dev/sdb2
[root@xserver1 ~]# vgcreate xcloudvg -s 16 /dev/sdb1 /dev/sdb2
[root@xserver1 ~]# vgdisplay 
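With a non-default PE size, LV sizes are always rounded to whole extents: at 16 MB per extent, a 12G LV is exactly 12*1024/16 extents, so `lvcreate -l` with the extent count is equivalent to `-L` with the size. A quick check of that arithmetic (values assumed from the tasks above):

```shell
# Extent arithmetic for a 16 MB PE size.
PE_MB=16
LV_GB=12
extents=$(( LV_GB * 1024 / PE_MB ))
echo "a ${LV_GB}G LV = $extents extents"   # lvcreate -l 768 == lvcreate -L 12G
```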

Database management

Using the provided "all-in-one" VM, enter the database. (1) Create a local user examuser with password 000000; (2) query the host, user, and password fields of the user table in the mysql database; (3) grant this user the local privileges SELECT, DELETE, UPDATE, and CREATE on all databases. Submit the commands and their output, in order, as text in the answer box.
[root@xserver1 ~]# mysql -uroot -p000000
MariaDB [(none)]> create user examuser@localhost identified by '000000';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> use mysql
MariaDB [mysql]> select host,user,password from user;
+-----------+----------+-------------------------------------------+
| host      | user     | password                                  |
+-----------+----------+-------------------------------------------+
| localhost | root     | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| xserver1  | root     | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| 127.0.0.1 | root     | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| ::1       | root     | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| %         | root     | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| xserver2  | user     | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| localhost | examuser | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
+-----------+----------+-------------------------------------------+
7 rows in set (0.00 sec)

MariaDB [mysql]> grant SELECT,DELETE,UPDATE,CREATE on *.* to examuser@localhost;
Query OK, 0 rows affected (0.00 sec)

MariaDB management

Create a VM in VMware from the provided CentOS-7-x86_64-DVD-1511.iso, configure the network and YUM repository yourself, and install the mariadb database. After installation, log in to the database and query the current system time and user. Submit the commands and their output, in order, as text in the answer box. (Database user root, password 000000; use lowercase for all database commands.)
[root@xserver1 ~]# mysql -uroot -p000000
MariaDB [(none)]> select user() ,now();
+----------------+---------------------+
| user()         | now()               |
+----------------+---------------------+
| root@localhost | 2021-11-17 15:05:08 |
+----------------+---------------------+
1 row in set (0.00 sec)

OpenStack operations

OpenStack Keystone management

Using the provided "all-in-one" VM, create a user testuser with password xiandian, assign testuser to the admin project, and grant the user admin privileges. Submit the commands and query results, in order, as text in the answer box.
[root@controller ~]# openstack user create --password xiandian --project admin --domain xiandian testuser
+--------------------+----------------------------------+
| Field              | Value                            |
+--------------------+----------------------------------+
| default_project_id | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| domain_id          | 9321f21a94ef4f85993e92a228892418 |
| enabled            | True                             |
| id                 | 806b2c56147e41699ddd89af955a6f20 |
| name               | testuser                         |
+--------------------+----------------------------------+

OpenStack Glance management

Start the provided openstackallinone image in VMware and check the state of each OpenStack service yourself, troubleshooting any problems you find. A cirros-0.3.4-x86_64-disk.img image exists in /root on the xserver1 node; upload it with the glance command under the name mycirros, and finally submit the output of glance image-show id as text in the answer box.
[root@controller ~]# scp 192.168.100.11:/root/cirros-0.3.4-x86_64-disk.img .
[root@controller ~]#  glance image-create --name mycirros --disk-format qcow2 --container-format bare --progress < cirros-0.3.4-x86_64-disk.img 
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2021-11-24T15:54:45Z                 |
| disk_format      | qcow2                                |
| id               | 45d98d72-b23f-464d-b66e-13f154eed216 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | mycirros                             |
| owner            | f9ff39ba9daa4e5a8fee1fc50e2d2b34     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2021-11-24T15:54:45Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+
[root@controller ~]# glance image-show 45d98d72-b23f-464d-b66e-13f154eed216
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2021-11-24T15:54:45Z                 |
| disk_format      | qcow2                                |
| id               | 45d98d72-b23f-464d-b66e-13f154eed216 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | mycirros                             |
| owner            | f9ff39ba9daa4e5a8fee1fc50e2d2b34     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2021-11-24T15:54:45Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+

OpenStack Cinder management

Start the provided openstackallinone image in VMware and check the state of each OpenStack service yourself, troubleshooting any problems you find. Using the Cinder service, create a volume type named "lvm", then create a 2G volume named BlockVloume with the "lvm" type and query the volume's details. When done, submit the output of cinder show BlockVloume as text in the answer box.
[root@controller ~]# cinder type-create lvm
+--------------------------------------+------+-------------+-----------+
|                  ID                  | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| caad5c75-59e3-4167-ad15-19e9f27cee6f | lvm  |      -      |    True   |
+--------------------------------------+------+-------------+-----------+
[root@controller ~]# cinder create --name BlockVloume --volume-type lvm 2
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|      consistencygroup_id       |                 None                 |
|           created_at           |      2021-11-24T15:57:22.000000      |
|          description           |                 None                 |
|           encrypted            |                False                 |
|               id               | d013e4b2-b0b8-41b4-95e3-2bd47f3431fa |
|            metadata            |                  {}                  |
|        migration_status        |                 None                 |
|          multiattach           |                False                 |
|              name              |             BlockVloume              |
|     os-vol-host-attr:host      |                 None                 |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   f9ff39ba9daa4e5a8fee1fc50e2d2b34   |
|       replication_status       |               disabled               |
|              size              |                  2                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |               creating               |
|           updated_at           |                 None                 |
|            user_id             |   0befa70f767848e39df8224107b71858   |
|          volume_type           |                 lvm                  |
+--------------------------------+--------------------------------------+
[root@controller ~]# cinder show d013e4b2-b0b8-41b4-95e3-2bd47f3431fa
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|      consistencygroup_id       |                 None                 |
|           created_at           |      2021-11-24T15:57:22.000000      |
|          description           |                 None                 |
|           encrypted            |                False                 |
|               id               | d013e4b2-b0b8-41b4-95e3-2bd47f3431fa |
|            metadata            |                  {}                  |
|        migration_status        |                 None                 |
|          multiattach           |                False                 |
|              name              |             BlockVloume              |
|     os-vol-host-attr:host      |          controller@lvm#LVM          |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   f9ff39ba9daa4e5a8fee1fc50e2d2b34   |
|       replication_status       |               disabled               |
|              size              |                  2                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |              available               |
|           updated_at           |      2021-11-24T15:57:23.000000      |
|            user_id             |   0befa70f767848e39df8224107b71858   |
|          volume_type           |                 lvm                  |
+--------------------------------+--------------------------------------+

OpenStack Nova management

Using the provided "all-in-one" VM, use the nova commands to create a flavor named exam with ID 1234, 1024M of memory, a 20G disk, and 2 vCPUs, then view exam's details. Submit the commands and their output, in order, as text in the answer box.
[root@controller ~]# nova flavor-create exam 1234 1024 20 2
+------+------+-----------+------+-----------+------+-------+-------------+-----------+
| ID   | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+------+------+-----------+------+-----------+------+-------+-------------+-----------+
| 1234 | exam | 1024      | 20   | 0         |      | 2     | 1.0         | True      |
+------+------+-----------+------+-----------+------+-------+-------------+-----------+
[root@controller ~]# nova flavor-show exam
+----------------------------+-------+
| Property                   | Value |
+----------------------------+-------+
| OS-FLV-DISABLED:disabled   | False |
| OS-FLV-EXT-DATA:ephemeral  | 0     |
| disk                       | 20    |
| extra_specs                | {}    |
| id                         | 1234  |
| name                       | exam  |
| os-flavor-access:is_public | True  |
| ram                        | 1024  |
| rxtx_factor                | 1.0   |
| swap                       |       |
| vcpus                      | 2     |
+----------------------------+-------+

OpenStack Nova management

Start the provided openstackallinone image in VMware and check the state of each OpenStack service yourself, troubleshooting any problems you find. Using the nova commands, query nova's full hypervisor list, then view the details of the hypervisor host with ID 1. Submit the commands and their output as text in the answer box.
[root@controller ~]# nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 1  | controller          | up    | enabled |
+----+---------------------+-------+---------+
[root@controller ~]# nova hypervisor-show 1
+---------------------------+------------------------------------------+
| Property                  | Value                                    |
+---------------------------+------------------------------------------+
| cpu_info_arch             | x86_64                                   |
| cpu_info_features         | ["smap", "avx", "clflush", "sep",        |
|                           | "syscall", "vme", "invpcid", "tsc",      |
|                           | "fsgsbase", "xsave", "pge", "vmx",       |
|                           | "erms", "cmov", "smep", "pcid", "pat",   |
|                           | "lm", "msr", "adx", "3dnowprefetch",     |
|                           | "nx", "fxsr", "sse4.1", "pae", "sse4.2", |
|                           | "pclmuldq", "fma", "tsc-deadline",       |
|                           | "mmx", "osxsave", "cx8", "mce", "de",    |
|                           | "rdtscp", "ht", "pse", "lahf_lm", "abm", |
|                           | "rdseed", "popcnt", "mca", "pdpe1gb",    |
|                           | "apic", "sse", "f16c", "mpx", "invtsc",  |
|                           | "pni", "aes", "avx2", "sse2", "ss",      |
|                           | "hypervisor", "bmi1", "bmi2", "ssse3",   |
|                           | "fpu", "cx16", "pse36", "mtrr", "movbe", |
|                           | "rdrand", "x2apic"]                      |
| cpu_info_model            | Broadwell-noTSX                          |
| cpu_info_topology_cells   | 1                                        |
| cpu_info_topology_cores   | 2                                        |
| cpu_info_topology_sockets | 1                                        |
| cpu_info_topology_threads | 1                                        |
| cpu_info_vendor           | Intel                                    |
| current_workload          | 0                                        |
| disk_available_least      | 31                                       |
| free_disk_gb              | 34                                       |
| free_ram_mb               | 7296                                     |
| host_ip                   | 192.168.100.10                           |
| hypervisor_hostname       | controller                               |
| hypervisor_type           | QEMU                                     |
| hypervisor_version        | 2003000                                  |
| id                        | 1                                        |
| local_gb                  | 34                                       |
| local_gb_used             | 0                                        |
| memory_mb                 | 7808                                     |
| memory_mb_used            | 512                                      |
| running_vms               | 0                                        |
| service_disabled_reason   | None                                     |
| service_host              | controller                               |
| service_id                | 6                                        |
| state                     | up                                       |
| status                    | enabled                                  |
| vcpus                     | 2                                        |
| vcpus_used                | 0                                        |
+---------------------------+------------------------------------------+

OpenStack Swift management

Start the provided openstackallinone image in VMware and check the state of each OpenStack service yourself, troubleshooting any problems you find. Using the swift commands, create a container named examcontainer, then upload a file test.txt to it (create the file yourself). After the upload, use a command to inspect the container. Submit the commands and their output as text in the answer box. (40 points)
[root@controller ~]# swift post examcontainer
[root@controller ~]# touch test.txt
[root@controller ~]# swift upload examcontainer test.txt
test.txt
[root@controller ~]# swift list examcontainer
[root@controller ~]# swift stat examcontainer

Docker

Docker installation

On the xserver1 node, configure a YUM repository yourself and install the docker service (the required packages are in Docker.tar.gz in /root on xserver1). After installing the service, upload registry_latest.tar to xserver1 and configure it as a private registry. When starting the registry container, map its internal storage directory to /opt/registry on the host and its internal port 5000 to port 5000 on the host. Submit, in order, the command that starts the registry container with its output, and then the output of docker info, as text in the answer box.
[root@xserver1 ~]# tar xfv Docker.tar.gz
[root@xserver1 ~]# vi /etc/yum.repos.d/local.repo
 # add:
   [docker]
   name=docker
   baseurl=file:///root/Docker
   gpgcheck=0
   enabled=1
[root@xserver1 ~]# yum repolist
[root@xserver1 ~]# yum install -y docker-ce
[root@xserver1 ~]# systemctl daemon-reload
[root@xserver1 ~]# systemctl start docker
[root@xserver1 ~]# ./image.sh 
[root@xserver1 ~]# vi /etc/docker/daemon.json 
# add:
 {
   "insecure-registries": ["192.168.100.11:5000"]
 }
[root@xserver1 ~]# systemctl restart docker
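A syntax error in daemon.json stops dockerd from starting, so it is worth validating the file before the restart. A sketch; `DOCKER_ETC` stands in for `/etc/docker`:

```shell
# Write daemon.json and check that it parses as JSON before restarting docker.
DOCKER_ETC="${DOCKER_ETC:-.}"
cat > "$DOCKER_ETC/daemon.json" <<'EOF'
{
  "insecure-registries": ["192.168.100.11:5000"]
}
EOF
python3 -m json.tool "$DOCKER_ETC/daemon.json" > /dev/null && echo 'daemon.json: valid JSON'
```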
[root@xserver1 ~]# docker run -d --restart=always -v /opt/registry/:/var/lib/registry -p 5000:5000 registry:latest
950319053db0f90eef33b33ce3bc76a93d3b6cf4b17c27b532d23e8aa9ef7bb9
[root@xserver1 ~]# docker ps
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                    NAMES
950319053db0        registry:latest              "/entrypoint.sh /etc…"   19 seconds ago      Up 18 seconds       0.0.0.0:5000->5000/tcp   xenodochial_johnson
[root@xserver1 ~]# docker info
# Answer box
[root@xserver1 ~]# docker info
Containers: 1
 Running: 0
 Paused: 0
 Stopped: 1
Images: 11
Server Version: 18.09.6
Storage Driver: devicemapper
 Pool Name: docker-253:0-67486317-pool
 Pool Blocksize: 65.54kB
 Base Device Size: 10.74GB
 Backing Filesystem: xfs
 Udev Sync Supported: true
 Data file: /dev/loop1
 Metadata file: /dev/loop2
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 1.339GB
 Data Space Total: 107.4GB
 Data Space Available: 26.1GB
 Metadata Space Used: 2.077MB
 Metadata Space Total: 2.147GB
 Metadata Space Available: 2.145GB
 Thin Pool Minimum Free Space: 10.74GB
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.107-RHEL7 (2015-10-14)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-327.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.688GiB
Name: xserver1
ID: VXSJ:7YFV:GWBV:2E3E:LAIM:EQ36:MRCD:Q6BG:IRBS:HJXU:K4ZD:E377
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.

[root@xserver1 ~]# docker run -d -v /opt/registry:/var/lib/registry -p 5000:5000 registry:latest
b45c91876af54542dbc15706b7cb037e5b716068f7d33bba42df1f874d3d6bbf

Docker operations

On the xserver1 node, upload nginx_latest.tar, load it, then tag the image and push it to the private registry. On the xserver2 node, install the docker service yourself and configure xserver2 to use xserver1's private registry; once configured, pull the nginx image from it on xserver2. Finally, submit the output of docker images on xserver2 as text in the answer box.
[root@xserver1 ~]# cd images
[root@xserver1 images]# docker load -i nginx_latest.tar
Loaded image: nginx:latest
[root@xserver1 images]# docker tag nginx:latest 192.168.100.11:5000/nginx:latest
[root@xserver1 images]# docker push 192.168.100.11:5000/nginx:latest
The push refers to repository [192.168.100.11:5000/nginx]
a89b8f05da3a: Pushed 
6eaad811af02: Pushed 
b67d19e65ef6: Pushed 
latest: digest: sha256:f56b43e9913cef097f246d65119df4eda1d61670f7f2ab720831a01f66f6ff9c size: 948

[root@xserver1 ~]# cp -r Docker /opt/
[root@xserver2 ~]# vi /etc/yum.repos.d/ftp.repo
# add:
[docker]
name=docker
baseurl=ftp://xserver1/Docker
gpgcheck=0
enabled=1
[root@xserver2 ~]# yum repolist
[root@xserver2 ~]# yum install -y docker-ce
[root@xserver2 ~]# systemctl daemon-reload
[root@xserver2 ~]# systemctl start docker
[root@xserver2 ~]# vi /etc/docker/daemon.json 
# add:
 {
   "insecure-registries": ["192.168.100.11:5000"]
 }
[root@xserver2 ~]# systemctl restart docker
[root@xserver2 ~]# docker pull 192.168.100.11:5000/nginx:latest
[root@xserver2 ~]# docker images
REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
192.168.100.11:5000/nginx   latest              540a289bab6c        2 years ago         126MB

Docker management

Assume a docker image tomcat:latest currently exists. Export the tomcat image under the name tomcat_images.tar and place it in the /media directory. Enter the commands for these operations in the answer box.
# (retag an existing image here only to stand in for the assumed tomcat:latest)
[root@xserver2 ~]# docker tag 192.168.100.11:5000/nginx:latest tomcat:latest
[root@xserver2 ~]# docker save tomcat:latest > /media/tomcat_images.tar

Writing a Dockerfile

On the xserver1 node, create a new httpd directory, then write a Dockerfile meeting these requirements: 1) use the centos:latest image as the base image; 2) set the author to xiandian; 3) the Dockerfile must delete the image's yum repositories and use the current system's local.repo file; 4) install the http service; 5) expose port 80. After writing it, build an image named httpd:v1.0. When done, submit the Dockerfile and the image list as text in the answer box.
[root@xserver1 ~]# mkdir httpd 
[root@xserver1 ~]# cd httpd/
[root@xserver1 httpd]# vi Dockerfile 
FROM centos:latest
MAINTAINER xiandian
RUN rm -f /etc/yum.repos.d/* 
COPY local.repo /etc/yum.repos.d/
RUN yum install -y httpd
EXPOSE 80
[root@xserver1 httpd]# cat local.repo 
[centos]
name=centos
baseurl=ftp://192.168.100.11/centos
gpgcheck=0
enabled=1
[root@xserver1 httpd]# docker build -t httpd:v1.0 .
[root@xserver1 httpd]# docker images
REPOSITORY                  TAG                 IMAGE ID            CREATED              SIZE
httpd                       v1.0                4ff5ef091914        19 seconds ago       220MB
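A quick smoke test of the freshly built image can be sketched as below (commands echoed, side-effect-free; the host port 8080 is an illustrative choice, not part of the task). Because the Dockerfile defines no CMD, a foreground httpd command must be supplied at run time:

```shell
IMAGE="httpd:v1.0"
HOST_PORT=8080
# Run the image detached, mapping an arbitrary host port to the exposed port 80:
echo "docker run -d --name httpd-test -p $HOST_PORT:80 $IMAGE /usr/sbin/httpd -DFOREGROUND"
# The default Apache test page should answer on the mapped port:
echo "curl -s http://localhost:$HOST_PORT/"
```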

Writing a Dockerfile (JDK)

Using the xserver1 node, create a directory centos-jdk and copy the provided jdk-8u141-linux-x64.tar.gz
into the new directory, then write a Dockerfile with the following requirements:
    1. use the centos:latest base image;
    2. set the author to xiandian;
    3. create the directory /usr/local/java to hold the JDK files;
    4. copy the JDK archive into that directory inside the image and have it extracted automatically;
    5. create a symlink: ln -s /usr/local/java/jdk1.8.0_141 /usr/local/java/jdk;
    6. set the following environment variables:
        ENV JAVA_HOME /usr/local/java/jdk
        ENV JRE_HOME ${JAVA_HOME}/jre
        ENV CLASSPATH .:${JAVA_HOME}/lib:${JRE_HOME}/lib
        ENV PATH ${JAVA_HOME}/bin:$PATH
After writing it, build an image named centos-jdk and, once the build succeeds, view the image list.
Finally, write the Dockerfile contents, the build command, and the image-list command with its (abridged) output below.
[root@xserver1 ~]# mkdir centos-jdk
[root@xserver1 ~]# cd centos-jdk/

[root@xserver1 centos-jdk]# cat Dockerfile 
FROM centos:latest
MAINTAINER xiandian
WORKDIR /usr/local/java
ADD jdk-8u141-linux-x64.tar.gz /usr/local/java
RUN ln -s /usr/local/java/jdk1.8.0_141 /usr/local/java/jdk
ENV JAVA_HOME /usr/local/java/jdk 
ENV JRE_HOME ${JAVA_HOME}/jre
ENV CLASSPATH .:${JAVA_HOME}/lib:${JRE_HOME}/lib 
ENV PATH ${JAVA_HOME}/bin:$PATH

[root@xserver1 centos-jdk]# cp /root/jdk/jdk-8u141-linux-x64.tar.gz .

[root@xserver1 centos-jdk]# docker build -t centos-jdk:v1 .
Sending build context to Docker daemon  185.5MB
Step 1/9 : FROM centos:latest
 ---> 0f3e07c0138f
Step 2/9 : MAINTAINER xiandian
 ---> Using cache
 ---> 380ab4775829
Step 3/9 : WORKDIR /usr/local/java
 ---> Running in 3d042489bc79
Removing intermediate container 3d042489bc79
 ---> 0ca117e514c2
Step 4/9 : ADD jdk-8u141-linux-x64.tar.gz /usr/local/java
 ---> a46b588b978d
Step 5/9 : RUN ln -s /usr/local/java/jdk1.8.0_141 /usr/local/java/jdk
 ---> Running in 5a6643a9a5cb
Removing intermediate container 5a6643a9a5cb
 ---> 0bcbb2fbe73f
Step 6/9 : ENV JAVA_HOME /usr/local/java/jdk
 ---> Running in 7522f805f7d4
Removing intermediate container 7522f805f7d4
 ---> 3d366a42f998
Step 7/9 : ENV JRE_HOME ${JAVA_HOME}/jre
 ---> Running in b550575dc17e
Removing intermediate container b550575dc17e
 ---> e0df108c1aed
Step 8/9 : ENV CLASSPATH .:${JAVA_HOME}/lib:${JRE_HOME}/lib
 ---> Running in 1b439c9f4e25
Removing intermediate container 1b439c9f4e25
 ---> bb89ab740e4e
Step 9/9 : ENV PATH ${JAVA_HOME}/bin:$PATH
 ---> Running in d18c59ac844d
Removing intermediate container d18c59ac844d
 ---> 59d4683996a7
Successfully built 59d4683996a7
Successfully tagged centos-jdk:v1

[root@xserver1 centos-jdk]# docker images
REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
centos-jdk                  v1                  3ca44b341427        7 seconds ago       596MB
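The environment variables baked into the image can be verified from a throwaway container (echoed sketch; assumes the centos-jdk:v1 image built above):

```shell
IMAGE="centos-jdk:v1"
# ENV values set in the Dockerfile are visible to every process in the container:
echo "docker run --rm $IMAGE java -version"        # resolved via PATH -> ${JAVA_HOME}/bin
echo "docker run --rm $IMAGE printenv JAVA_HOME"   # should print /usr/local/java/jdk
```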

Deploying a Swarm Cluster

Using the xserver1 and xserver2 nodes, configure the network yourself and install docker-ce. Deploy a Swarm cluster and install the Portainer graphical management tool. When deployment is complete, log in to ip:9000 in a browser to reach the Swarm console. Submit the output of curl swarm_ip:9000 to the answer box as text.
[root@xserver1 ~]# docker swarm init --advertise-addr 192.168.100.11
Swarm initialized: current node (z9kmpvwdd1mcytddlr7qxfqz3) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-31hn156p86fr6eo720rq9mqafckpw1who0i39dxv9m3tbpjd0l-86zun6fgtb3b56iguf91luoge 192.168.100.11:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.


[root@xserver2 ~]# docker swarm join --token SWMTKN-1-31hn156p86fr6eo720rq9mqafckpw1who0i39dxv9m3tbpjd0l-86zun6fgtb3b56iguf91luoge 192.168.100.11:2377
This node joined a swarm as a worker.

[root@xserver1 centos-jdk]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
80cisraj17qq1tuawca29a8b1 *   xserver1            Ready               Active              Leader              18.09.6
dsot82834vazufdo3x9y79lgj     xserver2            Ready               Active                                  18.09.6

[root@xserver1 ~]#  docker tag 4cda95efb0e4 portainer/portainer:latest

[root@xserver1 ~]# docker service create \
  --name portainer \
  --publish 9000:9000 \
  --replicas=1 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  --mount type=volume,src=portainer_data,dst=/data \
  portainer/portainer -H unix:///var/run/docker.sock
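Once the service is created, Swarm's own commands confirm the task converged and where it was scheduled (echoed, side-effect-free sketch; the manager address comes from this document):

```shell
PORT=9000
# Confirm the portainer service reached its desired replica count (1/1):
echo "docker service ls --filter name=portainer"
# Show which node the single replica landed on (constrained to a manager):
echo "docker service ps portainer"
# The console itself should answer over HTTP on the published port:
echo "curl -s http://192.168.100.11:$PORT"
```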

# Open IP:9000 in a browser and set the admin password