This section describes the hardware and software requirements for the SBC SWe on an OpenStack platform.
OpenStack Requirements
The SBC SWe supports the following OpenStack environments:
Server Hardware Requirements
S-SBC SWe Requirements
The system hosting the SBC SWe must meet the following requirements to achieve the performance targets listed:
M-SBC SWe Requirements
OAM Node Requirements
I-SBC SWe Requirements
Note: For deployments that require it, you can instantiate the SBC SWe in smaller-sized configurations that use limited memory and vCPU resources. However, the limited resources place some restrictions on capacity and capabilities. Refer to Small SBC SWe Deployment Characteristics and Small SBC SWe Configuration Performance Metrics for more information.
Recommended Host Settings
Setting the disk cache mode to "writethrough"

To prevent the SBC from crashing when the compute host is power cycled, it is recommended to set the cache mode for the disk type "file" to "writethrough". To change the disk cache mode from "none" to "writethrough", execute the following steps on the compute host:

1. Retrieve the current disk cache modes from the nova configuration by executing the following command:
Command:
crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf libvirt disk_cachemodes

Sample Output:
file=none,block=writeback,network=writeback

If the parameter has never been set, the command instead returns:
Parameter not found: disk_cachemodes.
Note: Observe "file=none" in the sample output above. Continue to Step 2 to set "file=writethrough".
2. Set the disk type "file" to "writethrough" by executing the following command:

Command:
crudini --set /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf libvirt disk_cachemodes 'file=writethrough,block=writeback,network=writeback'
Note: The cache modes for the other two disk types, "block" and "network", are determined by the customer. The available cache modes are "writethrough", "writeback", and "none".
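As an aside, the crudini commands above simply read and write the disk_cachemodes key in the [libvirt] section of nova.conf. The following sketch shows the equivalent lookup with awk on a throwaway file, for environments where crudini is not available; the file path and values here are illustrative, not taken from a live host:

```shell
# Create a throwaway nova.conf fragment (illustrative values only).
cat > /tmp/nova_example.conf <<'EOF'
[DEFAULT]
debug = false

[libvirt]
disk_cachemodes = file=writethrough,block=writeback,network=writeback
EOF

# Equivalent of "crudini --get <file> libvirt disk_cachemodes":
# track when we are inside the [libvirt] section, then print the value.
awk -F' = ' '/^\[libvirt\]/{s=1;next} /^\[/{s=0} s && $1=="disk_cachemodes"{print $2}' /tmp/nova_example.conf
```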
3. Restart all nova containers by executing the following command:

Command:
docker restart $(docker ps|grep nova|sed -r 's/^([^ ]+).*/\1/')
Note: The container restart procedure may vary depending on the container management tool. When in doubt, reboot the compute host.
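The restart command above collects the nova container IDs by keeping only the first whitespace-delimited field of each "docker ps" line. A standalone sketch of that sed extraction, using made-up container IDs rather than output from a real host:

```shell
# Made-up "docker ps"-style rows; only the first field (the container ID)
# survives the sed substitution.
sample='abc123def456  nova_compute  Up 2 hours
0123456789ab  nova_libvirt  Up 2 hours'
echo "$sample" | sed -r 's/^([^ ]+).*/\1/'
```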
4. Verify in the nova configuration that the cache mode "file" is set to "writethrough" by executing the following command:

Command:
crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf libvirt disk_cachemodes

Sample Output:
file=writethrough,block=writeback,network=writeback
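This verification can also be scripted. The following sketch (an assumed helper, not part of the product) splits a disk_cachemodes string such as the sample output above, extracts the "file" entry, and checks it:

```shell
# Parse a disk_cachemodes string (sample value, as returned by the
# crudini --get command above) and confirm "file" uses writethrough.
modes="file=writethrough,block=writeback,network=writeback"
file_mode=$(echo "$modes" | tr ',' '\n' | grep '^file=' | cut -d= -f2)
if [ "$file_mode" = "writethrough" ]; then
  echo "OK: file disk cache mode is writethrough"
else
  echo "WARNING: file disk cache mode is '$file_mode'"
fi
```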
5. If any VMs/instances exist on the compute host, verify in libvirtd that cache='writethrough' is set for the disk type "file" by executing the following command:

Command:
virsh dumpxml <instance_name> | grep -A 1 "disk type='file'"

Note: In the command above, replace <instance_name> with the actual name of the instance.

Sample Output:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='writethrough'/>
For example:

virsh dumpxml instance-0000eb25 | grep "cache"

Output:
<driver name='qemu' type='raw' cache='writethrough'/>
<driver name='qemu' type='raw' cache='writethrough' discard='unmap'/>
<driver name='qemu' type='raw' cache='writethrough' discard='unmap'/>

Note: To list the instance names, execute "virsh list --all". Example:

virsh list --all
 Id Name State
----------------------------------------------------
 1 instance-0000eb1c running
 2 instance-0000eb25 running
6. Verify the changes in QEMU by executing the following command. Before you verify the changes, ensure the instance is running.

Command:
virsh qemu-monitor-command instance-0000eb25 --pretty '{"execute":"query-block"}' | egrep 'cache|device|filename|no-flush|direct|writeback'

Output:
"no-flush": false
"direct": false
"writeback": false
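The egrep filter above simply pulls the relevant keys out of the JSON that query-block returns. As an illustration on a hand-written sample response fragment (not captured from a live host), the same check reduces to a single grep:

```shell
# Hand-written sample of a query-block response fragment; with the cache
# mode set to writethrough, the "writeback" flag reported by QEMU is false.
response='{"return":[{"device":"drive-virtio-disk0","inserted":{"cache":{"writeback":false,"direct":false,"no-flush":false}}}]}'
echo "$response" | grep -o '"writeback":false'
```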