
Symptom: every ESXi host could connect to the FreeNAS storage, but vMotion stalled at 78% and failed with: Source detected that destination failed to resume

Fix: on ESXi, the UUID of an NFS datastore is computed from the server IP address and the mount path. Checking each host turned up the following details:

10.0.1.9

~ # ls -l /vmfs/volumes

drwxr-xr-x 1 root root 8 Jan 1 1970 0ecd1f74-38d55d6d-95a2-79a7178280b3

drwxr-xr-x 1 root root 8 Jan 1 1970 2da668ef-40e5d96b-90bf-855ddb9c5547

drwxr-xr-t 1 root root 980 Mar 17 17:02 4d6ea2a6-ff94acc4-4f37-001a4d42857d

drwxr-xr-x 1 root root 8 Jan 1 1970 4d8de181-2ea87eb5-35b0-00e081d4b050

lrwxr-xr-x 1 root root 35 Mar 26 13:08 Hypervisor1 -> de64637c-5f781215-485b-3a65fb181338

lrwxr-xr-x 1 root root 35 Mar 26 13:08 Hypervisor2 -> 0ecd1f74-38d55d6d-95a2-79a7178280b3

lrwxr-xr-x 1 root root 35 Mar 26 13:08 Hypervisor3 -> 2da668ef-40e5d96b-90bf-855ddb9c5547

lrwxr-xr-x 1 root root 17 Mar 26 13:08 VMServersStorage1 -> e3c54644-6adc1841

lrwxr-xr-x 1 root root 35 Mar 26 13:08 datastore1 -> 4d6ea2a6-ff94acc4-4f37-001a4d42857d

drwxr-xr-x 1 root root 8 Jan 1 1970 de64637c-5f781215-485b-3a65fb181338

drwxrwxrwx 1 root root 1024 Mar 26 2011 e3c54644-6adc1841

10.0.1.7

~ # ls -l /vmfs/volumes

drwxr-xr-x 1 root root 8 Jan 1 1970 0a555d73-08bed3c3-9d5a-90e7d3ae6b0c

drwxr-xr-x 1 root root 8 Jan 1 1970 2da668ef-40e5d96b-90bf-855ddb9c5547

drwxr-xr-x 1 root root 8 Jan 1 1970 37a66a70-cd7e15fb-b3cb-c9dc9c65e84c

drwxr-xr-x 1 root root 8 Jan 1 1970 4d829597-1ac77eb8-3971-001a4d42857d

drwxr-xr-t 1 root root 2380 Mar 20 05:02 4d8295a3-c16b3516-6c77-001a4d42857d

drwxr-xr-t 1 root root 2240 Mar 20 04:55 4d829651-6beaef96-95f5-001a4d42857d

lrwxr-xr-x 1 root root 35 Mar 26 13:11 Hypervisor1 -> 0a555d73-08bed3c3-9d5a-90e7d3ae6b0c

lrwxr-xr-x 1 root root 35 Mar 26 13:11 Hypervisor2 -> 37a66a70-cd7e15fb-b3cb-c9dc9c65e84c

lrwxr-xr-x 1 root root 35 Mar 26 13:11 Hypervisor3 -> 2da668ef-40e5d96b-90bf-855ddb9c5547

lrwxr-xr-x 1 root root 17 Mar 26 13:11 VMServersStorage1 -> e3c54644-6adc1841

drwxrwxrwx 1 root root 1024 Mar 26 2011 e3c54644-6adc1841

lrwxr-xr-x 1 root root 35 Mar 26 13:11 vm-backup1 -> 4d8295a3-c16b3516-6c77-001a4d42857d

lrwxr-xr-x 1 root root 35 Mar 26 13:11 vm-backup2 -> 4d829651-6beaef96-95f5-001a4d42857d

10.0.1.8

~ # ls -l /vmfs/volumes

drwxr-xr-x 1 root root 8 Jan 1 1970 2da668ef-40e5d96b-90bf-855ddb9c5547

drwxr-xr-x 1 root root 8 Jan 1 1970 34f290e9-e510916c-438c-17eedaa964c5

drwxrwxrwx 1 root root 1024 Mar 26 2011 5bb6df06-115121ea

lrwxr-xr-x 1 root root 35 Mar 26 13:12 Hypervisor1 -> 34f290e9-e510916c-438c-17eedaa964c5

lrwxr-xr-x 1 root root 35 Mar 26 13:12 Hypervisor2 -> a2f206ec-1ece952b-3ccc-a6dfc62854cc

lrwxr-xr-x 1 root root 35 Mar 26 13:12 Hypervisor3 -> 2da668ef-40e5d96b-90bf-855ddb9c5547

lrwxr-xr-x 1 root root 17 Mar 26 13:12 VMServersStorage1 -> 5bb6df06-115121ea

drwxr-xr-x 1 root root 8 Jan 1 1970 a2f206ec-1ece952b-3ccc-a6dfc62854cc

10.0.1.10

~ # ls -l /vmfs/volumes

drwxr-xr-x 1 root root 8 Jan 1 1970 2bc1807c-40e94d99-7080-142d4239b108

drwxr-xr-x 1 root root 8 Jan 1 1970 4cfb9705-c11342d4-c0ee-00e081d4ae05

drwxr-xr-t 1 root root 1120 Mar 26 03:39 4cfb9715-f01d7dd5-5c14-00e081d4ae05

drwxrwxrwx 1 root root 1024 Mar 26 2011 5bb6df06-115121ea

drwxr-xr-x 1 root root 8 Jan 1 1970 7382f873-3d8ea9ed-2a2e-1e5ff47f7887

lrwxr-xr-x 1 root root 35 Mar 26 13:03 Hypervisor1 -> 7382f873-3d8ea9ed-2a2e-1e5ff47f7887

lrwxr-xr-x 1 root root 35 Mar 26 13:03 Hypervisor2 -> 2bc1807c-40e94d99-7080-142d4239b108

lrwxr-xr-x 1 root root 35 Mar 26 13:03 Hypervisor3 -> e00f98e1-2bcc0c91-e7a2-3487611c1557

lrwxr-xr-x 1 root root 17 Mar 26 13:03 VMServersStorage1 -> 5bb6df06-115121ea

lrwxr-xr-x 1 root root 35 Mar 26 13:03 datastore1 (5) -> 4cfb9715-f01d7dd5-5c14-00e081d4ae05

drwxr-xr-x 1 root root 8 Jan 1 1970 e00f98e1-2bcc0c91-e7a2-3487611c1557

10.0.1.9

lrwxr-xr-x 1 root root 17 Mar 26 13:08 VMServersStorage1 -> e3c54644-6adc1841

~ # esxcfg-nas -l

VMServersStorage1 is /mnt/VMServersStorage1 from 10.0.1.15 mounted

10.0.1.7

lrwxr-xr-x 1 root root 17 Mar 26 13:11 VMServersStorage1 -> e3c54644-6adc1841

~ # esxcfg-nas -l

VMServersStorage1 is /mnt/VMServersStorage1 from 10.0.1.15 mounted

10.0.1.8

lrwxr-xr-x 1 root root 17 Mar 26 13:12 VMServersStorage1 -> 5bb6df06-115121ea

~ # esxcfg-nas -l

VMServersStorage1 is /mnt/VMServersStorage1/ from 10.0.1.15 mounted

10.0.1.10

lrwxr-xr-x 1 root root 17 Mar 26 13:03 VMServersStorage1 -> 5bb6df06-115121ea

~ # esxcfg-nas -l

VMServersStorage1 is /mnt/VMServersStorage1/ from 10.0.1.15 mounted

As the output above shows, hosts .9 and .7 compute one UUID for VMServersStorage1 while .8 and .10 compute another, because the mount paths differ: one is /mnt/VMServersStorage1 and the other is /mnt/VMServersStorage1/. The only difference is the trailing slash. Both forms reach the same storage, but the computed UUIDs differ, so vMotion between the two groups fails. Given the situation at the time, the fix was to unmount the datastore on .9 and .7 and remount it using the address with the trailing slash.
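The trailing-slash behavior comes down to hashing the server/path string exactly as entered. Below is a minimal Python sketch of the idea only; the MD5 digest is a stand-in, not VMware's actual UUID algorithm:

```python
import hashlib

def datastore_id(server: str, path: str) -> str:
    """Illustrative stand-in for the NFS datastore UUID: a digest of the
    server address and export path exactly as entered. (MD5 is used only
    to demonstrate the idea; VMware's real algorithm is internal.)"""
    return hashlib.md5(f"{server}:{path}".encode()).hexdigest()[:16]

# The same export, specified with and without a trailing slash:
a = datastore_id("10.0.1.15", "/mnt/VMServersStorage1")
b = datastore_id("10.0.1.15", "/mnt/VMServersStorage1/")
print(a == b)   # False: the two hosts derive different "UUIDs"

# Normalizing the path before hashing removes the mismatch:
c = datastore_id("10.0.1.15", "/mnt/VMServersStorage1/".rstrip("/"))
print(a == c)   # True
```

This is why remounting every host with the byte-identical address string (slash or no slash, as long as it is consistent) makes the UUIDs line up again.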


vMotion error caused by a UUID mismatch

2010-09-29 11:52

Environment: two ESX hosts connected to NAS storage (NFS).

Problem: vMotion of a virtual machine between the two ESX hosts failed with: "A general system error occurred: failed to resume on destination. VMotion stops at 80% and fails."

Resolution:

Cause

Migration or power-on operations may fail due to a mismatch in the datastore UUID. If two ESX hosts represent the same datastore using different UUIDs in /vmfs/volumes, you may experience issues performing power-on or migration operations.

Identifying the issue

To verify if you have a mismatch between UUIDs:

1. Log into the ESX hosts (this may be the source of the migration) as root via SSH or at the console.

2. Run the command:

ls -l /vmfs/volumes

The output appears similar to:

drwxrwxrwt 1 root root 980 May 29 09:25 8687c82b-3e59cbae

lrwxr-xr-x 1 root root 17 Jun 11 12:51 vm_nfs_disk -> 8687c82b-3e59cbae

3. Make note of the UUID (8687c82b-3e59cbae in this example).

4. Log into a different ESX host (this may be the destination of the migration) as root via SSH or at the console.

5. Run the command:

ls -l /vmfs/volumes

The output appears similar to:

drwxrwxrwt 1 root root 980 May 29 09:25 fef0f955-dceeecfc

lrwxr-xr-x 1 root root 17 Jun 11 12:51 vm_nfs_disk -> fef0f955-dceeecfc

6. Make note of the UUID (fef0f955-dceeecfc in this example).

7. If the values recorded in step 3 and in step 6 are not the same, then you have a UUID mismatch.
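The comparison in steps 1-7 can also be scripted. The hypothetical helper below extracts the "name -> UUID" symlinks from each host's ls -l /vmfs/volumes output and reports datastores whose UUIDs disagree; the sample lines are taken from the listings for 10.0.1.9 and 10.0.1.8 above:

```python
import re

def symlink_map(ls_output: str) -> dict:
    """Extract 'name -> uuid' symlink entries from `ls -l /vmfs/volumes` output."""
    links = {}
    for line in ls_output.splitlines():
        m = re.search(r"(\S+) -> (\S+)\s*$", line)
        if m and line.lstrip().startswith("l"):  # symlink entries only
            links[m.group(1)] = m.group(2)
    return links

def uuid_mismatches(host_a: str, host_b: str) -> list:
    """Datastore names present on both hosts but pointing at different UUIDs."""
    a, b = symlink_map(host_a), symlink_map(host_b)
    return [name for name in a.keys() & b.keys() if a[name] != b[name]]

# Sample lines from the two hosts above:
host9 = "lrwxr-xr-x 1 root root 17 Mar 26 13:08 VMServersStorage1 -> e3c54644-6adc1841"
host8 = "lrwxr-xr-x 1 root root 17 Mar 26 13:12 VMServersStorage1 -> 5bb6df06-115121ea"
print(uuid_mismatches(host9, host8))   # ['VMServersStorage1']
```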

Solution

A UUID mismatch between two datastores occurs because the UUID is based on a hash of the NFS server and path, as seen by running esxcfg-nas. If you have specified the NFS server information using different methods on different hosts, then the hash values, and ultimately the UUIDs, will be different.

To resolve the mismatch:

1. Log into the ESX hosts (this may be the source of the migration) as root via SSH or at the console.

2. Run the command:

esxcfg-nas -l

The output appears similar to:

nfs_datastore is /vol/nfs_datastore from mounted

Note: The UUID is based on /vol/nfs_datastore and the DNS name of the NFS server.

3. Make note of the method being used to identify the NFS server (here, its DNS name).

4. Log into a different ESX host (this may be the destination of the migration) as root via SSH or at the console.

5. Run the command:

esxcfg-nas -l

The output appears similar to:

nfs_datastore is /vol/nfs_datastore from 192.168.1.150 mounted

Note: The UUID is based on /vol/nfs_datastore and the IP address 192.168.1.150.

6. Make note of the method being used to identify the NFS server (here, the IP address 192.168.1.150).

7. Choose one method to identify the NFS server (DNS or IP).

8. Select the host which is not using the method selected in step 7.

9. Connect to the host using VI Client with the appropriate permissions.

10. Power off (or relocate if possible) the virtual machines residing on the NFS datastore which has the mismatch.

11. Remove the NFS datastore.

12. Add the same datastore using the method selected in step 7. For more information, see the Creating an NFS-Based Datastore documentation in the ESX Configuration Guide for the applicable version of the VMware product.

13. Repeat the operation until all ESX hosts reference the same NFS server using the method selected in step 7.
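Steps 1-8 boil down to grouping hosts by the exact (server, path) pair shown by esxcfg-nas -l; more than one group for the same datastore label means a UUID mismatch. A small sketch using the four hosts' output from the first case (the parsing format is an assumption based on the lines shown above):

```python
import re
from collections import defaultdict

def parse_nas_line(line: str):
    """Pull (label, path, server) out of one `esxcfg-nas -l` output line."""
    m = re.match(r"(\S+) is (\S+) from (\S+) mounted", line.strip())
    return (m.group(1), m.group(2), m.group(3)) if m else None

def group_by_spec(outputs: dict) -> dict:
    """Map each distinct (server, path) spec to the hosts that use it."""
    groups = defaultdict(list)
    for host, line in outputs.items():
        label, path, server = parse_nas_line(line)
        groups[(server, path)].append(host)
    return dict(groups)

# esxcfg-nas -l output observed on the four hosts above:
outputs = {
    "10.0.1.9":  "VMServersStorage1 is /mnt/VMServersStorage1 from 10.0.1.15 mounted",
    "10.0.1.7":  "VMServersStorage1 is /mnt/VMServersStorage1 from 10.0.1.15 mounted",
    "10.0.1.8":  "VMServersStorage1 is /mnt/VMServersStorage1/ from 10.0.1.15 mounted",
    "10.0.1.10": "VMServersStorage1 is /mnt/VMServersStorage1/ from 10.0.1.15 mounted",
}
for spec, hosts in group_by_spec(outputs).items():
    print(spec, hosts)
```

With two groups in the output, every host in the minority group needs its datastore removed and re-added using the majority's exact server and path string.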

