<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Scenarios on Krkn</title><link>https://deploy-preview-247--krkn-chaos.netlify.app/docs/scenarios/</link><description>Recent content in Scenarios on Krkn</description><generator>Hugo</generator><language>en</language><lastBuildDate>Fri, 06 Mar 2026 14:30:51 -0500</lastBuildDate><atom:link href="https://deploy-preview-247--krkn-chaos.netlify.app/docs/scenarios/index.xml" rel="self" type="application/rss+xml"/><item><title>Krkn-Hub All Scenarios Variables</title><link>https://deploy-preview-247--krkn-chaos.netlify.app/docs/scenarios/all-scenario-env/</link><pubDate>Thu, 05 Jan 2017 00:00:00 +0000</pubDate><guid>https://deploy-preview-247--krkn-chaos.netlify.app/docs/scenarios/all-scenario-env/</guid><description>&lt;p>These variables apply to the top-level configuration template shared by all the scenarios in Krkn-Hub.&lt;/p>
&lt;p>See the description and default values below&lt;/p>
&lt;h4 id="supported-parameters-for-all-scenarios-in-krkn-hub">
 Supported parameters for all scenarios in Krkn-Hub
 &lt;a class="td-heading-self-link" href="#supported-parameters-for-all-scenarios-in-krkn-hub" aria-label="Heading self-link">&lt;/a>
&lt;/h4>
&lt;p>The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:&lt;/p>
&lt;p>Example:
&lt;code>export &amp;lt;parameter_name&amp;gt;=&amp;lt;value&amp;gt;&lt;/code>&lt;/p>
&lt;table>
 &lt;thead>
 &lt;tr>
 &lt;th>Parameter&lt;/th>
 &lt;th>Description&lt;/th>
 &lt;th>Default&lt;/th>
 &lt;/tr>
 &lt;/thead>
 &lt;tbody>
 &lt;tr>
 &lt;td>CERBERUS_ENABLED&lt;/td>
 &lt;td>Set this to true if cerberus is running and monitoring the cluster&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>CERBERUS_URL&lt;/td>
 &lt;td>URL to poll for the go/no-go signal&lt;/td>
 &lt;td>http://0.0.0.0:8080&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>WAIT_DURATION&lt;/td>
 &lt;td>Duration in seconds to wait between each chaos scenario&lt;/td>
 &lt;td>60&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>ITERATIONS&lt;/td>
 &lt;td>Number of times to execute the scenarios&lt;/td>
 &lt;td>1&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>DAEMON_MODE&lt;/td>
 &lt;td>When enabled, iterations are set to infinity, meaning Kraken will inject chaos indefinitely&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>PUBLISH_KRAKEN_STATUS&lt;/td>
 &lt;td>Publishes the Kraken status (on SIGNAL_ADDRESS and PORT) when enabled&lt;/td>
 &lt;td>True&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>SIGNAL_ADDRESS&lt;/td>
 &lt;td>Address on which to publish the Kraken status&lt;/td>
 &lt;td>0.0.0.0&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>PORT&lt;/td>
 &lt;td>Port on which to publish the Kraken status&lt;/td>
 &lt;td>8081&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>SIGNAL_STATE&lt;/td>
 &lt;td>Waits for the RUN signal when set to PAUSE before running the scenarios, refer &lt;a href="https://deploy-preview-247--krkn-chaos.netlify.app/docs/krkn/signal/">docs&lt;/a> for more details&lt;/td>
 &lt;td>RUN&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>DEPLOY_DASHBOARDS&lt;/td>
 &lt;td>Deploys a mutable Grafana instance loaded with dashboards visualizing performance metrics pulled from in-cluster Prometheus. The dashboard is exposed as a route.&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>CAPTURE_METRICS&lt;/td>
 &lt;td>Captures metrics as specified in the profile from in-cluster prometheus. Default metrics captures are listed &lt;a href="https://github.com/krkn-chaos/krkn/blob/master/config/metrics-aggregated.yaml">here&lt;/a>&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>ENABLE_ALERTS&lt;/td>
 &lt;td>Evaluates expressions from in-cluster prometheus and exits 0 or 1 based on the severity set. &lt;a href="https://github.com/krkn-chaos/krkn/blob/master/config/alerts.yaml">Default profile&lt;/a>.&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>ALERTS_PATH&lt;/td>
 &lt;td>Path to the alerts file to use when ENABLE_ALERTS is set&lt;/td>
 &lt;td>config/alerts&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>ELASTIC_SERVER&lt;/td>
 &lt;td>URL of the Elasticsearch instance used to store telemetry data&lt;/td>
 &lt;td>&lt;em>blank&lt;/em>&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>ELASTIC_INDEX&lt;/td>
 &lt;td>Elastic search index pattern to post results to&lt;/td>
 &lt;td>&lt;em>blank&lt;/em>&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>HEALTH_CHECK_URL&lt;/td>
 &lt;td>URL to continually check and detect downtimes&lt;/td>
 &lt;td>&lt;em>blank&lt;/em>&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>HEALTH_CHECK_INTERVAL&lt;/td>
 &lt;td>Interval in seconds at which to poll the health check URL&lt;/td>
 &lt;td>2&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>HEALTH_CHECK_BEARER_TOKEN&lt;/td>
 &lt;td>Bearer token used for authenticating into health check URL&lt;/td>
 &lt;td>&lt;em>blank&lt;/em>&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>HEALTH_CHECK_AUTH&lt;/td>
 &lt;td>Tuple of (username,password) used for authenticating into health check URL&lt;/td>
 &lt;td>&lt;em>blank&lt;/em>&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>HEALTH_CHECK_EXIT_ON_FAILURE&lt;/td>
 &lt;td>When set to True, exits if the health check fails for the application; can be True/False&lt;/td>
 &lt;td>&lt;em>blank&lt;/em>&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>HEALTH_CHECK_VERIFY&lt;/td>
 &lt;td>Health check URL SSL validation; can be True/False&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>KUBE_VIRT_CHECK_INTERVAL&lt;/td>
 &lt;td>Interval in seconds at which to test KubeVirt connections&lt;/td>
 &lt;td>2&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>KUBE_VIRT_NAMESPACE&lt;/td>
 &lt;td>Namespace in which to find and watch VMIs&lt;/td>
 &lt;td>&lt;em>blank&lt;/em>&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>KUBE_VIRT_NAME&lt;/td>
 &lt;td>Regex-style name to match VMIs to watch&lt;/td>
 &lt;td>&lt;em>blank&lt;/em>&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>KUBE_VIRT_FAILURES&lt;/td>
 &lt;td>When set to True, only reports when SSH connections to a VMI fail; can be True/False&lt;/td>
 &lt;td>&lt;em>blank&lt;/em>&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>KUBE_VIRT_DISCONNECTED&lt;/td>
 &lt;td>Use the disconnected check, bypassing the cluster API; can be True/False&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>KUBE_VIRT_NODE_NAME&lt;/td>
 &lt;td>If set, further filters VMs to track only those on the specified node&lt;/td>
 &lt;td>&lt;em>blank&lt;/em>&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>KUBE_VIRT_EXIT_ON_FAIL&lt;/td>
 &lt;td>Fails the run if VMs still have a failed status at the end of the run; can be True/False&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>KUBE_VIRT_SSH_NODE&lt;/td>
 &lt;td>If set, provides a backup node to SSH through; choose a node that isn&amp;rsquo;t targeted by the chaos&lt;/td>
 &lt;td>&lt;em>blank&lt;/em>&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>CHECK_CRITICAL_ALERTS&lt;/td>
 &lt;td>When enabled, checks Prometheus for critical alerts firing after the chaos run&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>TELEMETRY_ENABLED&lt;/td>
 &lt;td>Enables/disables the telemetry collection feature&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>TELEMETRY_API_URL&lt;/td>
 &lt;td>Telemetry service endpoint&lt;/td>
 &lt;td>&lt;a href="https://ulnmf9xv7j.execute-api.us-west-2.amazonaws.com/production">https://ulnmf9xv7j.execute-api.us-west-2.amazonaws.com/production&lt;/a>&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>TELEMETRY_USERNAME&lt;/td>
 &lt;td>Telemetry service username&lt;/td>
 &lt;td>redhat-chaos&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>TELEMETRY_PASSWORD&lt;/td>
 &lt;td>Telemetry service password&lt;/td>
 &lt;td>No default&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>TELEMETRY_PROMETHEUS_BACKUP&lt;/td>
 &lt;td>Enables/disables Prometheus data collection&lt;/td>
 &lt;td>True&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>TELEMTRY_FULL_PROMETHEUS_BACKUP&lt;/td>
 &lt;td>If set to False, only the /prometheus/wal folder will be downloaded&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>TELEMETRY_BACKUP_THREADS&lt;/td>
 &lt;td>Number of telemetry download/upload threads&lt;/td>
 &lt;td>5&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>TELEMETRY_ARCHIVE_PATH&lt;/td>
 &lt;td>Local path where the archive files will be temporarily stored&lt;/td>
 &lt;td>/tmp&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>TELEMETRY_MAX_RETRIES&lt;/td>
 &lt;td>Maximum number of upload retries (0 retries forever)&lt;/td>
 &lt;td>0&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>TELEMETRY_RUN_TAG&lt;/td>
 &lt;td>If set, appended to the run folder in the bucket (useful to group runs)&lt;/td>
 &lt;td>chaos&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>TELEMETRY_GROUP&lt;/td>
 &lt;td>If set, archives the telemetry in the S3 bucket in a folder named after the value&lt;/td>
 &lt;td>default&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>TELEMETRY_ARCHIVE_SIZE&lt;/td>
 &lt;td>Size in KB of each Prometheus data archive chunk; see the note below on choosing this value&lt;/td>
 &lt;td>1000&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>TELEMETRY_LOGS_BACKUP&lt;/td>
 &lt;td>Enables logs backup to S3&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>TELEMETRY_FILTER_PATTERN&lt;/td>
 &lt;td>Filters logs based on timestamp patterns&lt;/td>
 &lt;td>[&amp;quot;(\w{3}\s\d{1,2}\s\d{2}:\d{2}:\d{2}\.\d+).+&amp;quot;, &amp;quot;kinit (\d+/\d+/\d+\s\d{2}:\d{2}:\d{2})\s+&amp;quot;, &amp;quot;(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+Z).+&amp;quot;]&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>TELEMETRY_CLI_PATH&lt;/td>
 &lt;td>Path to the OC CLI; if not specified, it will be searched for in $PATH&lt;/td>
 &lt;td>&lt;em>blank&lt;/em>&lt;/td>
 &lt;/tr>
 &lt;/tbody>
&lt;/table>
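&lt;p>As a concrete, illustrative example, the shared parameters above can be exported on the host before launching any Krkn-Hub scenario container; the values chosen here are arbitrary:&lt;/p>

```shell
# Illustrative values only: run each scenario three times, wait two minutes
# between iterations, and fail the run if critical Prometheus alerts fire.
export ITERATIONS=3
export WAIT_DURATION=120
export CHECK_CRITICAL_ALERTS=True
export DAEMON_MODE=False

# Echo back what was set, for verification.
echo "ITERATIONS=$ITERATIONS WAIT_DURATION=$WAIT_DURATION"
# prints: ITERATIONS=3 WAIT_DURATION=120
```

&lt;p>Any parameter not exported keeps the default listed in the table.&lt;/p>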


&lt;div class="alert alert-primary" role="alert">
&lt;h4 class="alert-heading">Note&lt;/h4>

 The lower the TELEMETRY_ARCHIVE_SIZE, the higher the number of archive files produced and uploaded (and processed simultaneously by the backup threads). For unstable or slow connections it is better to keep this value low and increase the number of backup threads: on an upload failure, only the failed chunk is retried rather than the whole upload.

&lt;/div></description></item><item><title>Krknctl All Scenarios Variables</title><link>https://deploy-preview-247--krkn-chaos.netlify.app/docs/scenarios/all-scenario-env-krknctl/</link><pubDate>Thu, 05 Jan 2017 00:00:00 +0000</pubDate><guid>https://deploy-preview-247--krkn-chaos.netlify.app/docs/scenarios/all-scenario-env-krknctl/</guid><description>&lt;p>These variables apply to the top-level configuration template shared by all the scenarios in Krknctl.&lt;/p>
&lt;p>See the description and default values below&lt;/p>
&lt;h4 id="supported-parameters-for-all-scenarios-in-krknctl">
 Supported parameters for all scenarios in KrknCtl
 &lt;a class="td-heading-self-link" href="#supported-parameters-for-all-scenarios-in-krknctl" aria-label="Heading self-link">&lt;/a>
&lt;/h4>
&lt;p>The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:&lt;/p>
&lt;p>&lt;strong>Usage example:&lt;/strong>
&lt;code>--&amp;lt;parameter&amp;gt; &amp;lt;value&amp;gt;&lt;/code>&lt;/p>
&lt;style>
.wide-params-table table {
 width: 100%;
 table-layout: fixed;
}
.wide-params-table th,
.wide-params-table td {
 padding: 12px 16px;
 vertical-align: top;
 word-wrap: break-word;
 word-break: break-word;
 overflow-wrap: break-word;
}
.wide-params-table th:nth-child(1),
.wide-params-table td:nth-child(1) {
 width: 18%;
}
.wide-params-table th:nth-child(2),
.wide-params-table td:nth-child(2) {
 width: 28%;
}
.wide-params-table th:nth-child(3),
.wide-params-table td:nth-child(3) {
 width: 10%;
}
.wide-params-table th:nth-child(4),
.wide-params-table td:nth-child(4) {
 width: 14%;
}
.wide-params-table th:nth-child(5),
.wide-params-table td:nth-child(5) {
 width: 30%;
}
&lt;/style>
&lt;div class="wide-params-table">
&lt;table>
 &lt;thead>
 &lt;tr>
 &lt;th>Parameter&lt;/th>
 &lt;th>Description&lt;/th>
 &lt;th>Type&lt;/th>
 &lt;th>Possible Values&lt;/th>
 &lt;th>Default&lt;/th>
 &lt;/tr>
 &lt;/thead>
 &lt;tbody>
 &lt;tr>
 &lt;td>&amp;ndash;cerberus-enabled&lt;/td>
 &lt;td>Enables Cerberus Support&lt;/td>
 &lt;td>enum&lt;/td>
 &lt;td>True/False&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;cerberus-url&lt;/td>
 &lt;td>Cerberus http url&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>http://0.0.0.0:8080&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;distribution&lt;/td>
 &lt;td>Selects the orchestrator distribution&lt;/td>
 &lt;td>enum&lt;/td>
 &lt;td>openshift/kubernetes&lt;/td>
 &lt;td>openshift&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;krkn-kubeconfig&lt;/td>
 &lt;td>Sets the path where krkn will search for the kubeconfig inside the container&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>/home/krkn/.kube/config&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;wait-duration&lt;/td>
 &lt;td>Seconds to wait after each scenario&lt;/td>
 &lt;td>number&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>1&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;iterations&lt;/td>
 &lt;td>Number of times the same chaos scenario will be executed&lt;/td>
 &lt;td>number&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>1&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;daemon-mode&lt;/td>
 &lt;td>If set, the scenario will execute forever&lt;/td>
 &lt;td>enum&lt;/td>
 &lt;td>True/False&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;uuid&lt;/td>
 &lt;td>Sets krkn run uuid instead of generating it&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>-&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;capture-metrics&lt;/td>
 &lt;td>Enables metrics capture&lt;/td>
 &lt;td>enum&lt;/td>
 &lt;td>True/False&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;enable-alerts&lt;/td>
 &lt;td>Enables cluster alerts check&lt;/td>
 &lt;td>enum&lt;/td>
 &lt;td>True/False&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;alerts-path&lt;/td>
 &lt;td>Allows specifying a different alerts file path&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>config/alerts.yaml&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;metrics-path&lt;/td>
 &lt;td>Allows specifying a different metrics file path&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>config/metrics-aggregated.yaml&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;enable-es&lt;/td>
 &lt;td>Enables Elasticsearch data collection&lt;/td>
 &lt;td>enum&lt;/td>
 &lt;td>True/False&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;es-server&lt;/td>
 &lt;td>Elasticsearch instance URL&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>http://0.0.0.0&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;es-port&lt;/td>
 &lt;td>Elasticsearch instance port&lt;/td>
 &lt;td>number&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>443&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;es-username&lt;/td>
 &lt;td>Elasticsearch instance username&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>elastic&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;es-password&lt;/td>
 &lt;td>Elasticsearch instance password&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>-&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;es-verify-certs&lt;/td>
 &lt;td>Enables elasticsearch TLS certificate verification&lt;/td>
 &lt;td>enum&lt;/td>
 &lt;td>True/False&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;es-metrics-index&lt;/td>
 &lt;td>Index name for metrics in Elasticsearch&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>krkn-metrics&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;es-alerts-index&lt;/td>
 &lt;td>Index name for alerts in Elasticsearch&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>krkn-alerts&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;es-telemetry-index&lt;/td>
 &lt;td>Index name for telemetry in Elasticsearch&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>krkn-telemetry&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;check-critical-alerts&lt;/td>
 &lt;td>Enables checking for critical alerts&lt;/td>
 &lt;td>enum&lt;/td>
 &lt;td>True/False&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;telemetry-enabled&lt;/td>
 &lt;td>Enables telemetry support&lt;/td>
 &lt;td>enum&lt;/td>
 &lt;td>True/False&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;telemetry-api-url&lt;/td>
 &lt;td>API endpoint for telemetry data&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>&lt;a href="https://ulnmf9xv7j.execute-api.us-west-2.amazonaws.com/production">https://ulnmf9xv7j.execute-api.us-west-2.amazonaws.com/production&lt;/a>&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;telemetry-username&lt;/td>
 &lt;td>Username for telemetry authentication&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>redhat-chaos&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;telemetry-password&lt;/td>
 &lt;td>Password for telemetry authentication&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>-&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;telemetry-prometheus-backup&lt;/td>
 &lt;td>Enables Prometheus backup for telemetry&lt;/td>
 &lt;td>enum&lt;/td>
 &lt;td>True/False&lt;/td>
 &lt;td>True&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;telemetry-full-prometheus-backup&lt;/td>
 &lt;td>Enables full Prometheus backup for telemetry&lt;/td>
 &lt;td>enum&lt;/td>
 &lt;td>True/False&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;telemetry-backup-threads&lt;/td>
 &lt;td>Number of threads for telemetry backup&lt;/td>
 &lt;td>number&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>5&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;telemetry-archive-path&lt;/td>
 &lt;td>Path to save telemetry archive&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>/tmp&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;telemetry-max-retries&lt;/td>
 &lt;td>Maximum retries for telemetry operations&lt;/td>
 &lt;td>number&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>0&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;telemetry-run-tag&lt;/td>
 &lt;td>Tag for telemetry run&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>chaos&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;telemetry-group&lt;/td>
 &lt;td>Group name for telemetry data&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>default&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;telemetry-archive-size&lt;/td>
 &lt;td>Maximum size for telemetry archives&lt;/td>
 &lt;td>number&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>1000&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;telemetry-logs-backup&lt;/td>
 &lt;td>Enables logs backup for telemetry&lt;/td>
 &lt;td>enum&lt;/td>
 &lt;td>True/False&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;telemetry-filter-pattern&lt;/td>
 &lt;td>Filter pattern for telemetry logs&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>[&amp;quot;\w{3}\s\d{1,2}\s\d{2}:\d{2}:\d{2}\.\d+&amp;quot;, &amp;quot;kinit (\d+/\d+/\d+\s\d{2}:\d{2}:\d{2}&amp;quot;, &amp;quot;\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+Z&amp;quot;]&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;telemetry-cli-path&lt;/td>
 &lt;td>Path to telemetry CLI tool (oc)&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>-&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;telemetry-events-backup&lt;/td>
 &lt;td>Enables events backup for telemetry&lt;/td>
 &lt;td>enum&lt;/td>
 &lt;td>True/False&lt;/td>
 &lt;td>True&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;health-check-interval&lt;/td>
 &lt;td>How often to check the health check URLs (seconds)&lt;/td>
 &lt;td>number&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>2&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;health-check-url&lt;/td>
 &lt;td>URL to check the health of&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>-&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;health-check-auth&lt;/td>
 &lt;td>Authentication tuple to authenticate into health check URL&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>-&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;health-check-bearer-token&lt;/td>
 &lt;td>Bearer token to authenticate into health check URL&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>-&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;health-check-exit&lt;/td>
 &lt;td>Exit on failure when the health check URL cannot be reached&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>-&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;health-check-verify&lt;/td>
 &lt;td>SSL Verification to authenticate into health check URL&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>false&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;kubevirt-check-interval&lt;/td>
 &lt;td>How often to check the KubeVirt VMs SSH status (seconds)&lt;/td>
 &lt;td>number&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>2&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;kubevirt-namespace&lt;/td>
 &lt;td>KubeVirt namespace to check the health of&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>-&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;kubevirt-name&lt;/td>
 &lt;td>KubeVirt regex names to watch&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>-&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;kubevirt-only-failures&lt;/td>
 &lt;td>KubeVirt checks only report when a failure occurs&lt;/td>
 &lt;td>enum&lt;/td>
 &lt;td>True/False&lt;/td>
 &lt;td>false&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;kubevirt-disconnected&lt;/td>
 &lt;td>KubeVirt checks in disconnected mode, bypassing the cluster&amp;rsquo;s API&lt;/td>
 &lt;td>enum&lt;/td>
 &lt;td>True/False&lt;/td>
 &lt;td>false&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;kubevirt-ssh-node&lt;/td>
 &lt;td>KubeVirt backup node to SSH into when checking VMI IP address status&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>false&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;kubevirt-exit-on-failure&lt;/td>
 &lt;td>KubeVirt fails run if VMs still have false status&lt;/td>
 &lt;td>enum&lt;/td>
 &lt;td>True/False&lt;/td>
 &lt;td>false&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;kubevirt-node-name&lt;/td>
 &lt;td>Only track VMs in KubeVirt on given node name&lt;/td>
 &lt;td>string&lt;/td>
 &lt;td>-&lt;/td>
 &lt;td>false&lt;/td>
 &lt;/tr>
 &lt;tr>
 &lt;td>&amp;ndash;krkn-debug&lt;/td>
 &lt;td>Enables debug mode for Krkn&lt;/td>
 &lt;td>enum&lt;/td>
 &lt;td>True/False&lt;/td>
 &lt;td>False&lt;/td>
 &lt;/tr>
 &lt;/tbody>
&lt;/table>
&lt;/div> 
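&lt;p>For illustration, a krknctl invocation combining several of the flags above might look like the following. The scenario name &lt;code>pod-scenarios&lt;/code> is a placeholder; the command is only assembled as a string and printed here, since krknctl must be installed separately:&lt;/p>

```shell
# Hypothetical krknctl command line; "pod-scenarios" is a placeholder
# scenario name. The command is assembled and printed, not executed.
CMD="krknctl run pod-scenarios --iterations 2 --wait-duration 30 --krkn-debug True"
echo "$CMD"
```

&lt;p>Flags not passed on the command line keep the defaults listed in the table.&lt;/p>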


&lt;div class="alert alert-primary" role="alert">
&lt;h4 class="alert-heading">Note&lt;/h4>

 The lower the TELEMETRY_ARCHIVE_SIZE, the higher the number of archive files produced and uploaded (and processed simultaneously by the backup threads). For unstable or slow connections it is better to keep this value low and increase the number of backup threads: on an upload failure, only the failed chunk is retried rather than the whole upload.

&lt;/div></description></item><item><title>Supported Cloud Providers</title><link>https://deploy-preview-247--krkn-chaos.netlify.app/docs/scenarios/cloud_setup/</link><pubDate>Thu, 05 Jan 2017 00:00:00 +0000</pubDate><guid>https://deploy-preview-247--krkn-chaos.netlify.app/docs/scenarios/cloud_setup/</guid><description>&lt;ul>
&lt;li>&lt;a href="https://deploy-preview-247--krkn-chaos.netlify.app/docs/scenarios/cloud_setup/#aws">AWS&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://deploy-preview-247--krkn-chaos.netlify.app/docs/scenarios/cloud_setup/#gcp">GCP&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://deploy-preview-247--krkn-chaos.netlify.app/docs/scenarios/cloud_setup/#openstack">Openstack&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://deploy-preview-247--krkn-chaos.netlify.app/docs/scenarios/cloud_setup/#azure">Azure&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://deploy-preview-247--krkn-chaos.netlify.app/docs/scenarios/cloud_setup/#alibaba">Alibaba&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://deploy-preview-247--krkn-chaos.netlify.app/docs/scenarios/cloud_setup/#vmware">VMware&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://deploy-preview-247--krkn-chaos.netlify.app/docs/scenarios/cloud_setup/#ibmcloud">IBMCloud&lt;/a>&lt;/li>
&lt;/ul>
&lt;h2 id="aws">
 AWS
 &lt;a class="td-heading-self-link" href="#aws" aria-label="Heading self-link">&lt;/a>
&lt;/h2>
&lt;p>&lt;strong>NOTE&lt;/strong>: For clusters on AWS, make sure the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html">AWS CLI&lt;/a> is installed and properly &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html">configured&lt;/a> using an AWS account. This should set a configuration file at &lt;code>$HOME/.aws/config&lt;/code> for your AWS account. If you have multiple profiles configured on AWS, you can change the profile by setting &lt;code>export AWS_DEFAULT_PROFILE=&amp;lt;profile-name&amp;gt;&lt;/code>&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-bash" data-lang="bash">&lt;span class="line">&lt;span class="cl">&lt;span class="nb">export&lt;/span> &lt;span class="nv">AWS_DEFAULT_REGION&lt;/span>&lt;span class="o">=&lt;/span>&amp;lt;aws-region&amp;gt;
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>This configuration will work for self managed AWS, ROSA and Rosa-HCP&lt;/p></description></item><item><title>ManagedCluster Scenarios</title><link>https://deploy-preview-247--krkn-chaos.netlify.app/docs/scenarios/managed-cluster-scenario/managed-cluster-scenario/</link><pubDate>Wed, 04 Jan 2017 00:00:00 +0000</pubDate><guid>https://deploy-preview-247--krkn-chaos.netlify.app/docs/scenarios/managed-cluster-scenario/managed-cluster-scenario/</guid><description>&lt;p>&lt;a href="https://open-cluster-management.io/concepts/managedcluster/">ManagedCluster&lt;/a> scenarios provide a way to integrate kraken with &lt;a href="https://open-cluster-management.io/">Open Cluster Management (OCM)&lt;/a> and &lt;a href="https://www.redhat.com/en/technologies/management/advanced-cluster-management">Red Hat Advanced Cluster Management for Kubernetes (ACM)&lt;/a>.&lt;/p>
&lt;p>ManagedCluster scenarios leverage &lt;a href="https://open-cluster-management.io/concepts/manifestwork/">ManifestWorks&lt;/a> to inject faults into the ManagedClusters.&lt;/p>
&lt;p>The following ManagedCluster chaos scenarios are supported:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>managedcluster_start_scenario&lt;/strong>: Scenario to start the ManagedCluster instance.&lt;/li>
&lt;li>&lt;strong>managedcluster_stop_scenario&lt;/strong>: Scenario to stop the ManagedCluster instance.&lt;/li>
&lt;li>&lt;strong>managedcluster_stop_start_scenario&lt;/strong>: Scenario to stop and then start the ManagedCluster instance.&lt;/li>
&lt;li>&lt;strong>start_klusterlet_scenario&lt;/strong>: Scenario to start the klusterlet of the ManagedCluster instance.&lt;/li>
&lt;li>&lt;strong>stop_klusterlet_scenario&lt;/strong>: Scenario to stop the klusterlet of the ManagedCluster instance.&lt;/li>
&lt;li>&lt;strong>stop_start_klusterlet_scenario&lt;/strong>: Scenario to stop and start the klusterlet of the ManagedCluster instance.&lt;/li>
&lt;/ol>
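&lt;p>A minimal sketch of how one of these scenarios might be wired into the Kraken config; the surrounding key layout follows the linked example file and may differ across krkn versions:&lt;/p>

```yaml
# Hypothetical sketch: point the managedcluster_scenarios option at a
# scenario config file. The surrounding keys and the path are assumptions.
kraken:
  chaos_scenarios:
    - managedcluster_scenarios:
        - scenarios/kube/managedcluster_scenarios_example.yml
```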
&lt;p>ManagedCluster scenarios can be injected by placing the ManagedCluster scenarios config files under &lt;code>managedcluster_scenarios&lt;/code> option in the Kraken config. Refer to &lt;a href="https://github.com/redhat-chaos/krkn/blob/main/scenarios/kube/managedcluster_scenarios_example.yml">managedcluster_scenarios_example&lt;/a> config file.&lt;/p></description></item></channel></rss>