
For reference, see the official documentation:
https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/horizon.html

Dashboard graphical interface

Install the package:

[root@controller ~]#  yum install openstack-dashboard -y

Edit the /etc/openstack-dashboard/local_settings file and complete the following actions:

[root@controller ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST	/configure the dashboard to use OpenStack services on the controller
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin"

ALLOWED_HOSTS = ['*', ]		/allow all hosts to access the dashboard

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {			/configure the memcached session storage service
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True		/enable multi-domain support

OPENSTACK_API_VERSIONS = {
    "identity": 3,			/配置API版本:
    "image": 2,
    "volume": 2,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'			/set default as the default domain for users created via the dashboard

OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_ipv6': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,			/we have only set up flat networking and have not configured VXLAN yet, so disable these features
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}

TIME_ZONE = "Asia/Shanghai"		/set the time zone

[root@controller ~]# systemctl restart httpd.service memcached.service
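
As a quick sanity check (an optional step; this assumes Apache serves Horizon at the default /dashboard path), we can confirm the dashboard answers over HTTP:

[root@controller ~]# curl -I http://controller/dashboard/		/expect a 200 or a redirect to the login page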

Now we can access it through the web interface:

You can log in with the admin and demo users created earlier.


All the information we configured is visible.

Log in as a regular user:

There are far fewer features available than for the admin user.


Start an instance and open its console; you have to click "Show only console" before you can interact with it.


You can see the access rules we configured: ICMP and TCP.

Creating an instance from the graphical interface

First, delete the current instances:

Switch to the admin user and delete the network: delete the subnet first, then delete the network:

Before creating an instance, we must first create a network:



Switch to the demo user to create the instance:

  1. Instance details

  2. Image

  3. Flavor

  4. Network

  5. Security group

  6. Key pair

Then the instance can be launched (see the CLI sketch below):
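
For reference, the same launch can be done from the CLI. A minimal sketch, assuming a demo credentials file demo-openrc, a cirros image, an m1.tiny flavor, and a key pair named mykey; substitute the real network ID for NET_ID:

[root@controller ~]# source demo-openrc
[root@controller ~]# openstack network list		/look up the network ID
[root@controller ~]# openstack server create --flavor m1.tiny --image cirros \
>   --nic net-id=NET_ID --security-group default --key-name mykey vm1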


View the network topology:

Configuring a private network

https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/neutron-controller-install-option2.html

Controller node

Compared with the provider (public) network configuration, the changes we need to make are:

[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router		/enable the Modular Layer 2 (ML2) plug-in, the router service, and overlapping IP addresses
allow_overlapping_ips = True

Configure the Modular Layer 2 (ML2) plug-in (edit /etc/neutron/plugins/ml2/ml2_conf.ini):


[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]			/configure the public virtual network as a flat network
flat_networks = provider

[ml2_type_vxlan]		/configure the VXLAN network identifier range for private networks
vni_ranges = 1:1000

[securitygroup]
enable_ipset = True		/enable ipset to make security group rules more efficient

Configure the Linux bridge agent:

[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[vxlan]		/enable VXLAN overlay networking, set the IP address of the physical interface that carries overlay traffic, and enable layer-2 population
enable_vxlan = True
local_ip = 172.25.254.1
l2_population = True

Configure the layer-3 agent:

[root@controller ~]# vim /etc/neutron/l3_agent.ini
[DEFAULT]		/configure the Linux bridge interface driver and the external network bridge
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =

Configure the DHCP agent:

[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]	/already configured earlier, nothing to change
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
[root@controller ~]# systemctl restart neutron-server.service neutron-linuxbridge-agent.service 
[root@controller ~]# systemctl enable  --now neutron-l3-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-l3-agent.service to /usr/lib/systemd/system/neutron-l3-agent.service.
[root@controller ~]# neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 05dadf42-2c97-41ab-b33c-727919d7e7f7 | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
| 753f0cca-82ee-44f5-a77e-856924003134 | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-agent |
| 96ff50a4-b1c4-40f1-9fbc-a33f3803e0f2 | L3 agent           | controller | nova              | :-)   | True           | neutron-l3-agent          |
| d0837e5a-0351-408c-9bb8-1b808beb0e73 | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| d2a9b415-d227-4815-b449-a43e1d39b9fe | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+

Compute node

Configure the Linux bridge agent

[root@compute1 ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[vxlan]
enable_vxlan = True
local_ip = 172.25.254.2
l2_population = True

[root@compute1 ~]# systemctl restart neutron-linuxbridge-agent.service 
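
As a quick check of the overlay (the vxlan interface only appears once a VXLAN network actually has ports scheduled on this node):

[root@compute1 ~]# ip -d link show type vxlan		/should list a vxlan-<vni> interface when in use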
[root@controller ~]# vim  /etc/openstack-dashboard/local_settings
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': True,
    'enable_ipv6': True,
    'enable_distributed_router': True,
    'enable_ha_router': True,
    'enable_lb': True,			/re-enable these features
    'enable_firewall': True,
    'enable_vpn': True,
    'enable_fip_topology_check': True,
}

[root@controller ~]# systemctl restart httpd.service  memcached.service

When we log in again as a regular user, there is now an additional Router option.

Create a private network


Since it is a private network, the network segment can be defined freely.

A DNS server can also be defined.

So far the private network looks just like the public network, so we switch to the admin user:

Set public as an external network, then switch back to the demo user:

The two networks are on different segments; for them to communicate we need to create a router:



One end of the router is already connected to the public network.
Add an interface:
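
The same router setup can also be done from the CLI. A sketch, assuming the router is named router1 and the private subnet is named private-subnet:

[root@controller ~]# neutron router-create router1
[root@controller ~]# neutron router-gateway-set router1 public		/connect one end to the public network
[root@controller ~]# neutron router-interface-add router1 private-subnet		/add the private-network interface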

Look at the network topology again:

Very simple.

Now we create another instance:

Use the private network; leave everything else unchanged.

Now we have two instances, one external and one private.
Can they reach each other?

Currently vm2, the private instance, can reach the external network and can ping our external instance.

But the external host cannot ping the internal instance.
So we need to bind a floating IP to the private instance (in production environments this costs money):


Click Associate:

Connecting from the external host now works.

Its actual IP address is 10.0.0.3.
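
The floating IP can also be handled from the CLI. A sketch; FLOATING_IP stands for the address the first command allocates, and Mitaka-era clients spell these commands "openstack ip floating create/add" rather than the newer forms shown here:

[root@controller ~]# openstack floating ip create public		/allocate a floating IP from the public pool
[root@controller ~]# openstack server add floating ip vm2 FLOATING_IP		/associate it with vm2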

Packaging an image

We choose to package a CentOS image.

Minimal installation:


Use the minimal install option.


Note that when partitioning we can also create just a single / partition, which makes it easier to grow; choose the standard partition type with an ext4 filesystem.
Reboot after the installation completes:

Disable SELinux at boot, because it would otherwise create security contexts.

After booting, make the setting permanent in the configuration file.
Disable the firewall, then configure a network connection so we can install packages:



Add an IP address.
Add a yum repository:

We also need to install ACPI (the Advanced Configuration and Power Interface) support, so that instances can be rebooted or shut down through the acpid service; shutting down an instance from the web interface, for example, works by calling the guest's acpid interface.

# yum install acpid
# systemctl enable acpid

Configure metadata access:

An instance must interact with the metadata service to perform several tasks at boot time. For example, it must fetch the SSH public key and run user-data scripts.
So we install the cloud-init RPM:

[root@localhost yum.repos.d]# vi dvd.repo
[dvd]
name=rhel7.6
baseurl=http://172.25.254.67/rhel7.6
gpgcheck=0

[cloud]
name=cloud-init			/add a yum repository
baseurl=ftp://172.25.254.67/pub/openstack/cloud-init
gpgcheck=0
[root@localhost yum.repos.d]# yum install cloud-init -y
[root@localhost yum.repos.d]# vi /etc/cloud/cloud.cfg
		/this is cloud-init's configuration file; it can be tuned manually, but we leave it unchanged

Disk growing: install the growpart utility so that users can grow the partition with resize operations:

[root@localhost yum.repos.d]#  yum install cloud-utils-growpart -y
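
For reference, this is roughly what happens at first boot (a manual sketch, assuming the root disk appears as /dev/vda with the root filesystem on partition 1; cloud-init normally runs growpart automatically):

# growpart /dev/vda 1		/extend partition 1 to fill the disk
# resize2fs /dev/vda1		/then grow the ext4 filesystem to match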

At the moment we cannot see the boot log on the console, so we need to:

[root@localhost yum.repos.d]# cd /boot/grub2/
[root@localhost grub2]# vi grub.cfg 
        linux16 /boot/vmlinuz-3.10.0-957.el7.x86_64 root=UUID=d239d64e-556c-4a6d-975a-6658b30f24b9 ro console=tty0 console=ttyS0,115200n8

Delete rhgb quiet and replace it with console=tty0 console=ttyS0,115200n8 as shown above.

Disable zeroconf networking:

[root@localhost grub2]# cat /etc/sysconfig/network
# Created by anaconda
NOZEROCONF=yes
[root@localhost grub2]# poweroff

After powering it off, the image will only be able to initialize successfully in a cloud environment.

Cleanup (remove MAC address details)
During installation, the operating system records the MAC address of the virtual Ethernet card in locations such as /etc/sysconfig/network-scripts/ifcfg-eth0. However, the MAC address is different each time the image boots, so this information must be removed from the configuration files. A utility called virt-sysprep performs various cleanup tasks, such as removing MAC address references, and will clean up the virtual machine image:

[root@rhel7host ~]# virt-sysprep -d guest
[   0.0] Examining the guest ...
[  22.8] Performing "abrt-data" ...
[  22.8] Performing "backup-files" ...
...
[  23.7] Setting a random seed
[  23.7] Setting the machine ID in /etc/machine-id
[  24.0] Performing "lvm-uuids" ...
[root@rhel7host images]# du -sh guest.qcow2 
11G	guest.qcow2		/it is 11G at the moment; let's compress it
[root@rhel7host images]# virt-sparsify --compress guest.qcow2 /var/www/html/guest.qcow2
[   0.1] Create overlay file in /tmp to protect source disk
[   0.2] Examine source disk
 100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
[  16.4] Fill free space in /dev/sda1 with zero
 100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
[ 126.5] Copy to destination and make sparse
[ 332.4] Sparsify operation completed with no errors.
virt-sparsify: Before deleting the old disk, carefully check that the 
target disk boots and works correctly.
[root@rhel7host images]# cd  /var/www/html/

[root@rhel7host html]# du -sh guest.qcow2 
519M	guest.qcow2		/now only 519M

Uploading the image to OpenStack

As the administrator:

Click Create Image:

[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 6a9f52fd-0897-4ad6-ba54-176593191469 | guest  | active |
| 5b9aae7b-761b-4551-8a5d-e569b63fc070 | cirros | active |
+--------------------------------------+--------+--------+

Then we create a matching flavor:

Set the root disk size to 15G so we can test disk growing.
Create the instance.

The instance may fail to start because resources have run out; if so, delete the earlier vm1 and vm2 instances.

It is running.

The console log is coming through.

It obtained an internal IP. If we want to access it from outside, we need to bind a floating IP.

Access it from outside:

[root@controller ~]# ssh centos@172.25.254.103
[centos@vm1 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:a1:ed:5b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.4/24 brd 10.0.0.255 scope global noprefixroute dynamic eth0
       valid_lft 83791sec preferred_lft 83791sec		/the real address is 10.0.0.4
    inet6 fe80::f816:3eff:fea1:ed5b/64 scope link 
       valid_lft forever preferred_lft forever

[centos@vm1 ~]$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        15G  1.2G   13G   9%   /			/grown to 15G

Block Storage service (Cinder)

When local disks are not enough, we can use the block storage service as an interface to back-end storage.

The Block Storage service (cinder) provides block storage to instances. How storage is allocated and consumed is determined by the block storage driver, or by the drivers in a multi-backend configuration. Many drivers are available: NAS/SAN, NFS, iSCSI, Ceph, and so on.

Typically, the Block Storage API and scheduler services run on the controller node. Depending on the driver in use, the volume service can run on the controller node, on compute nodes, or on a standalone storage node.

The Block Storage service usually consists of the following components:

cinder-api

  • Accepts API requests and routes them to cinder-volume for execution.

cinder-volume

  • Interacts directly with the Block Storage service and with processes such as cinder-scheduler; it can also interact with them through a message queue. The cinder-volume service responds to read and write requests sent to the Block Storage service in order to maintain state. It can interact with a variety of storage providers through a driver architecture.

cinder-scheduler daemon

  • Selects the optimal storage provider node on which to create a volume. It is similar to the nova-scheduler component.

cinder-backup daemon

  • The cinder-backup service backs up volumes of any type to a backup storage provider. Like the cinder-volume service, it can interact with a variety of storage providers through a driver architecture.

Message queue

  • Routes information between the Block Storage processes.

Install and configure the controller node

[root@controller ~]# mysql -u root -p
Enter password: 

MariaDB [(none)]> CREATE DATABASE cinder;		/create the database
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    ->   IDENTIFIED BY 'cinder';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%'    IDENTIFIED BY 'cinder';
Query OK, 0 rows affected (0.00 sec)			/grant the cinder user privileges

MariaDB [(none)]> Ctrl-C -- exit!
Aborted
[root@controller ~]# source admin-openrc 
[root@controller ~]# openstack user create --domain default --password cinder cinder	/create the cinder user

[root@controller ~]# openstack role add --project service --user cinder admin	/add the admin role to the cinder user
[root@controller ~]# openstack service create --name cinder \
>   --description "OpenStack Block Storage" volume		/create the cinder and cinderv2 service entities
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 80aa9ccf11e04648aa84ded3b59e236a |
| name        | cinder                           |
| type        | volume                           |
+-------------+----------------------------------+
[root@controller ~]# openstack service create --name cinderv2 \
>   --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 1aa8657f3d974a249e0de3b73412f9bf |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+

Create the Block Storage service API endpoints for cinder:
[root@controller ~]# openstack endpoint create --region RegionOne \
>   volume public http://controller:8776/v1/%\(tenant_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne \
>   volume internal http://controller:8776/v1/%\(tenant_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne \
>   volume admin http://controller:8776/v1/%\(tenant_id\)s


Create the Block Storage service API endpoints for cinderv2:
[root@controller ~]# openstack endpoint create --region RegionOne \
>   volumev2 public http://controller:8776/v2/%\(tenant_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne \
>   volumev2 internal http://controller:8776/v2/%\(tenant_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne \
>   volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
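
A quick check that all six endpoints were registered:

[root@controller ~]# openstack endpoint list | grep volume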

Install and configure the components (edit /etc/cinder/cinder.conf):

[DEFAULT]
rpc_backend = rabbit		/configure the message queue
auth_strategy = keystone		/configure authentication
my_ip = 172.25.254.1		/the controller node's IP

[database]
connection = mysql+pymysql://cinder:cinder@controller/cinder		/configure database access

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack


[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp		/configure the lock path

[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder		/populate the Block Storage database
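
A quick way to confirm the sync worked (using the cinder database credentials created above):

[root@controller ~]# mysql -ucinder -pcinder cinder -e 'show tables;' | head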

Configure Compute to use Block Storage

[root@controller ~]# vim  /etc/nova/nova.conf 
[cinder]
os_region_name = RegionOne

[root@controller ~]# systemctl restart openstack-nova-api.service
[root@controller ~]# systemctl enable --now openstack-cinder-api.service openstack-cinder-scheduler.service

Install and configure a storage node

Bring up a virtual machine to serve as the storage node.

[root@block1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.67 	rhel7host
172.25.254.1    controller
172.25.254.2    compute1
172.25.254.3    block1

[root@block1 ~]# yum install chrony -y
[root@block1 ~]# vim /etc/chrony.conf 
server 172.25.254.67 iburst
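
Then start the time service and confirm this node syncs against rhel7host (a quick check):

[root@block1 ~]# systemctl enable --now chronyd
[root@block1 ~]# chronyc sources		/172.25.254.67 should appear as a source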

[root@compute1 ~]# scp /etc/yum.repos.d/openstack.repo  172.25.254.3:/etc/yum.repos.d/
[root@block1 ~]# yum install lvm2

[root@block1 ~]# systemctl enable --now lvm2-lvmetad.service

Then add a 20G virtual disk:

[root@block1 ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0003fd8b

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    41943039    19921920   8e  Linux LVM

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes				/the newly added disk
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


[root@block1 ~]# pvcreate /dev/sdb		/create the physical volume
  Physical volume "/dev/sdb" successfully created.
[root@block1 ~]# vgcreate cinder-volumes /dev/sdb			/create the volume group
  Volume group "cinder-volumes" successfully created
[root@block1 ~]# vim /etc/lvm/lvm.conf
        filter = [ "a/sda/", "a/sdb/", "r/.*/"]			/allow only these two devices to be used by LVM
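
After adjusting the filter, confirm that LVM still sees both devices and the new volume group:

[root@block1 ~]# pvs		/should show /dev/sda2 and /dev/sdb
[root@block1 ~]# vgs cinder-volumes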
        
[root@block1 ~]# vim /etc/cinder/cinder.conf 

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 172.25.254.3
enabled_backends = lvm
glance_api_servers = http://controller:9292

[database]
connection = mysql+pymysql://cinder:cinder@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi		/export volumes over iSCSI
iscsi_helper = lioadm

[root@block1 ~]#  systemctl enable --now openstack-cinder-volume.service target.service

Verify:

[root@controller ~]# cinder service-list
+------------------+------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |    Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled |   up  | 2020-07-21T03:04:14.000000 |        -        |
|  cinder-volume   | block1@lvm | nova | enabled |   up  | 2020-07-21T03:04:09.000000 |        -        |
+------------------+------------+------+---------+-------+----------------------------+-----------------+


The web interface now has an additional Volumes option:
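
Volumes can also be created and attached from the CLI. A sketch, assuming a 5G volume named vol1 attached to the instance vm1:

[root@controller ~]# openstack volume create --size 5 vol1
[root@controller ~]# openstack server add volume vm1 vol1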


[root@controller ~]# ssh centos@172.25.254.103
Last login: Mon Jul 20 18:49:46 2020 from 172.25.254.1
[centos@vm1 ~]$ sudo fdisk -l

Disk /dev/vda: 16.1 GB, 16106127360 bytes, 31457280 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a810c

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    31457246    15727599+  83  Linux

Disk /dev/vdb: 5368 MB, 5368709120 bytes, 10485760 sectors			/5G, taken from the cloud volume
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[centos@vm1 ~]$ sudo mkfs.ext4 /dev/vdb		/format it
[centos@vm1 ~]$ sudo mkdir /data
[centos@vm1 ~]$ sudo mount /dev/vdb /data
[centos@vm1 ~]$ df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/vda1       15349684 1175484  13424068   9% /
devtmpfs          238836       0    238836   0% /dev
tmpfs             249460       0    249460   0% /dev/shm
tmpfs             249460    8648    240812   4% /run
tmpfs             249460       0    249460   0% /sys/fs/cgroup
tmpfs              49896       0     49896   0% /run/user/1000
/dev/vdb         5029504   20472   4730504   1% /data

[centos@vm1 ~]$ su -
Password: 
Last login: Tue Jul 21 11:22:26 CST 2020 on pts/0

Growing the volume:

First detach the volume;

then extend the volume;

then attach the volume back to the vm1 host.

[root@vm1 ~]# fdisk -l

Disk /dev/vda: 16.1 GB, 16106127360 bytes, 31457280 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a810c

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    31457246    15727599+  83  Linux

Disk /dev/vdb: 10.7 GB, 10737418240 bytes, 20971520 sectors		/extended to 10G
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@vm1 ~]# mount /dev/vdb /data
[root@vm1 ~]# df -h /data
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb        4.8G   22M  4.6G   1% /data			/but it still shows 5G after mounting

[root@vm1 ~]# resize2fs /dev/vdb		/how to grow an ext filesystem
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/vdb is mounted on /data; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/vdb is now 2621440 blocks long.


[root@vm1 data]# df -h /data/
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb        9.8G   24M  9.3G   1% /data		/now it is 10G

[root@vm1 data]# ls		/the user's data is unchanged
adjtime      crypttab                 environment         GREP_COLORS  hosts        krb5.conf      logrotate.conf            mtab           profile           rwtab          subgid              vconsole.conf
aliases      csh.cshrc                ethertypes          group        hosts.allow  ld.so.cache    lost+found                myf         protocols         securetty      subuid              virc
aliases.db   csh.login                exports             group-       hosts.deny   ld.so.conf     machine-id                networks       rc.local          services       sudo.conf           yum.conf
anacrontab   DIR_COLORS               favicon.png         grub2.cfg    inittab      libaudit.conf  magic                     nsswitch.conf  redhat-release    sestatus.conf  sudoers
asound.conf  DIR_COLORS.256color      filesystems         gshadow      inputrc      libuser.conf   makedumpfile.conf.sample  os-release     resolv.conf       shadow         sudo-ldap.conf
bashrc       DIR_COLORS.lightbgcolor  fstab               gshadow-     issue        locale.conf    man_db.conf               passwd         resolv.conf.save  shadow-        sysctl.conf
cron.deny    dracut.conf              GeoIP.conf          host.conf    issue    localtime      mke2fs.conf               passwd-        rpc               shells         system-release
crontab      e2fsck.conf              GeoIP.conf.default  hostname     kdump.conf   login.defs     motd                      printcap       rsyslog.conf      statetab       system-release-cpe

Instance boot workflow:

Next we could also try deploying OpenStack with kolla-ansible, which deploys using Ansible (which is fairly fast) and runs the services in Docker containers.
