OpenStack:Rally
Using the containerised Rally client
Rally needs OpenStack admin credentials, so from a machine storing the admin openrc file, run the following commands:
# docker run --rm -it -h rally-testing --entrypoint /bin/bash -v <your-admin-openrc-file>:/openrc:ro registry.vscaler.com:5000/rally-openstack:1.7.0
$ source /openrc
$ rally env create --from-sysenv --name=<arbitrary-name-for-environment>
The last command will also create a deployment object with the same name - run rally deployment list to see it.
NOTE: When using non-containerised Rally (for example, installed directly with pip), you will need to create a database before creating an environment. To do this, simply run rally db create - this will create an SQLite database at /tmp/rally.sqlite on the host.
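To confirm the environment was set up correctly, it can be listed and checked. A minimal sketch, assuming rally env check is available in this Rally release and acts on the newly created environment (which becomes the active one):
$ rally env list
$ rally env check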
Then, look into /home/rally/source/samples/tasks/scenarios/ for sample scenarios or write your own scenario. For example:
$ cat ~/test_boot_and_delete_instance.json
{
"NovaServers.boot_and_delete_server": [
{
"args": {
"flavor": {
"name": "m1.small"
},
"image": {
"name": "centos7-1907"
},
"force_delete": false
},
"runner": {
"type": "constant",
"times": 10,
"concurrency": 2
},
"context": {
"users": {
"tenants": 3,
"users_per_tenant": 2
}
}
}
]
}
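Before starting a run, the task file can be validated against the deployment. A minimal sketch, assuming the rally task validate subcommand is available; the exact argument form may differ between Rally versions:
$ rally task validate ~/test_boot_and_delete_instance.json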
NOTE: All test scenarios are also available here: https://opendev.org/openstack/rally-openstack/src/branch/master/samples/tasks/scenarios.
List available plugins and scenarios:
$ rally plugin list --platform openstack
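To inspect the documentation and parameters of a single plugin, such as the scenario used in the sample task above, rally plugin show can be used (a short sketch):
$ rally plugin show --name NovaServers.boot_and_delete_server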
Finally, run the test:
$ rally task start /path/to/your/scenario.json
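While the task runs (or once it finishes), its progress and per-iteration details can be inspected. A sketch; depending on the Rally version these subcommands may expect the UUID via a --uuid flag instead, so check rally task --help for the exact syntax:
$ rally task status <task-uuid>
$ rally task detailed <task-uuid>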
Visualising test results
First off, you'll need the UUID of the task you want to export results from:
$ rally task list --deployment <your-deployment-name>
Export to an HTML file:
$ rally task report <task-uuid> --out ~/output.html
Export to JUnit XML (this file can then be used by Jenkins):
$ rally plugin list --platform openstack | grep junit
...
| TaskExporter | junit-xml | default | Generates task report in JUnit-XML format. |
...
$ rally task export <task-uuid> --type junit-xml --to ~/output.xml
Tempest validation/verification with Rally
First, just as before, pull the rally-openstack image, run a container from it, exec into the container, source the admin credentials and create an environment.
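For reference, these are the same setup commands as in the first section:
# docker run --rm -it -h rally-testing --entrypoint /bin/bash -v <your-admin-openrc-file>:/openrc:ro registry.vscaler.com:5000/rally-openstack:1.7.0
$ source /openrc
$ rally env create --from-sysenv --name=<arbitrary-name-for-environment>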
Then create a Tempest-based verifier:
$ rally verify create-verifier --type tempest --version 23.0.0 --name <arbitrary-name-for-the-verifier> $ rally verify list-verifiers
Run rally verify create-verifier --help to see what extra options are available.
WARNING: If the --version flag is not specified, the latest master revision of Tempest will be installed. This is not advisable as it may result in problems where Tempest's API drifts too much from what Rally expects.
NOTE: Run rally verify update-verifier --version <new-version-number> to change the version used by your current verifier.
With the verifier created, you can start running tests with rally verify start <args>.
For example, this command will run a smoke test:
$ rally verify start --pattern 'set=smoke'
Plenty of predefined test sets are available: 'identity', 'compute', 'baremetal', etc.
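For instance, to run only the identity tests using the same pattern syntax:
$ rally verify start --pattern 'set=identity'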
You can also run a list of tests specified in a file. To do this, first check which tests are available by running:
$ rally verify list-verifier-tests
Copy the lines for the tests you want to run into a text file, then start the verification:
$ cat <location-of-the-file-with-tests>
tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors[id-e36c0eaa-dff5-4082-ad1f-3f9a80aa3f59,smoke]
tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_with_detail[id-6e85fde4-b3cd-4137-ab72-ed5f418e8c24]
$ rally verify start --load-list <location-of-the-file-with-tests>
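Since the load list must match the verifier's output exactly, one convenient way to build the file is to filter the listing directly. A sketch; the grep pattern and file path are arbitrary:
$ rally verify list-verifier-tests | grep 'tempest.api.compute.flavors' > /tmp/flavor-tests.txt
$ rally verify start --load-list /tmp/flavor-tests.txt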
NOTE: You can use a list like this one: https://refstack.openstack.org/api/v1/guidelines/2020.06/tests?type=required but make sure the lines exactly match the ones output by rally verify list-verifier-tests. For example:
tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors[id-e36c0eaa-dff5-4082-ad1f-3f9a80aa3f59]
is not the same as:
tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors[id-e36c0eaa-dff5-4082-ad1f-3f9a80aa3f59,smoke]
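A sketch of fetching such a list and double-checking a test's exact local form before starting a run. The curl invocation and paths are assumptions, and the RefStack endpoint is assumed to return a plain-text test list as the note above implies:
$ curl -s 'https://refstack.openstack.org/api/v1/guidelines/2020.06/tests?type=required' -o /tmp/refstack-tests.txt
$ rally verify list-verifier-tests | grep 'test_list_flavors\['   # confirm the exact attribute tags locally
$ rally verify start --load-list /tmp/refstack-tests.txt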
Run rally verify start --help to see extra options.
After your verification is done, you can save a report with results:
$ rally verify report --type html-static --to <output-filename>.html
[[File:Rally verify sample report.png|800px]]
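Other report formats are available as well; past verifications and their UUIDs can be listed first. A sketch, assuming the json exporter is present (see rally verify report --help):
$ rally verify list
$ rally verify report --uuid <verification-uuid> --type json --to <output-filename>.json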
Rally-based test/validation suite
The code for the Rally-based test suite is available here:
https://gitlab.vscaler.com/mkarpiarz/rally-test-suite
Read the README file for details.
The general idea behind the suite is that the Client code (also known as the "Runner") has to be deployed and set up on a node with OpenStack credentials in the environment that one wishes to test. This Client then runs multiple test scenarios and sends a report to the Server, which then gathers and serves reports from multiple environments for viewing.
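Purely as an illustration of this flow (none of the names, paths or endpoints below come from the actual repo - see its README for the real implementation), a Runner along these lines could loop over scenario files and push each HTML report to the Server:
#!/bin/bash
# Hypothetical sketch only; the real Runner lives in the rally-test-suite repo.
REPORT_SERVER="http://<report-server>:8080/upload"    # assumed endpoint, not from the repo

for scenario in /opt/scenarios/*.json; do             # assumed location of scenario files
    rally task start "$scenario"
    uuid=$(rally task list --uuids-only | tail -n 1)  # take the most recent task
    rally task report "$uuid" --out "/tmp/${uuid}.html"
    # Send the report to the central server for aggregation (endpoint is assumed)
    curl -s -F "report=@/tmp/${uuid}.html" "$REPORT_SERVER"
done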
TODOs
- Move config parameters out of the code and into a config file.
- Run Tempest validation tests from acceptance lists, like this one: https://refstack.openstack.org/api/v1/guidelines/2020.06/tests?type=required (docs: https://rally.readthedocs.io/en/latest/verification/index.html)
- Find a method to add vScaler branding to HTML reports generated by Rally.
- Deploy the server container on a dedicated instance (with a DNS record).
References
- https://rally.readthedocs.io/en/latest/quick_start/tutorial.html
- https://rally.readthedocs.io/en/latest/quick_start/tutorial/step_1_setting_up_env_and_running_benchmark_from_samples.html
- https://github.com/openstack/rally/tree/stable/0.12/samples/tasks/scenarios
- https://opendev.org/openstack/rally-openstack
- https://opendev.org/openstack/rally-openstack/src/branch/master/samples/tasks/scenarios
- https://gitlab.vscaler.com/mkarpiarz/rally-test-suite
- https://rally.readthedocs.io/en/latest/task/index.html
- https://rally.readthedocs.io/en/latest/verification/verifiers.html#tempest
- https://rally.readthedocs.io/en/latest/verification/cli_reference.html