Code Documentation
This file contains the high-level interface for implementing verification item classes in library.py
- class checklib.CheckLibBase(df: DataFrame, params=None, results_folder=None)[source]
Bases:
ABC
Abstract class defining interfaces for item-specific verification classes
- add_md(md_file_path, img_folder, relative_path_to_img_in_md, item_dict, plot_option=None, fig_size=(6.4, 4.8))[source]
- property get_checks
- plot(plot_option, plt_pts=None, fig_size=(6.4, 4.8))[source]
Default plot function for showing results
- points = None
- result = Empty DataFrame Columns: [] Index: []
- class checklib.ContinuousDimmingCompliance(df: DataFrame, params=None, results_folder=None)[source]
Bases:
CheckLibBase
- add_md(md_file_path, img_folder, relative_path_to_img_in_md, item_dict, plot_option=None, fig_size=(6.4, 4.8))
- all_plot_aio(plt_pts, fig_size)
All in one plot of all samples
- all_plot_obo(plt_pts, fig_size)
One by one plot of all samples
- calculate_plot_day()
- daterange(start_date, end_date)
- day_plot_aio(plt_pts, fig_size)
All in one plot for one day
- day_plot_obo(plt_pts, fig_size)
One by one plot for one day
- flat_min_threshold = 60
- property get_checks
- plot(plot_option, plt_pts=None, fig_size=(6.4, 4.8))
Default plot function for showing results
- points = ['Electric_light_power']
- result = Empty DataFrame Columns: [] Index: []
- save_data(csv_path)
- class checklib.EconomizerHeatingCompliance(df: DataFrame, params=None, results_folder=None)[source]
Bases:
RuleCheckBase
- add_md(md_file_path, img_folder, relative_path_to_img_in_md, item_dict, plot_option=None, fig_size=(6.4, 4.8))
- all_plot_aio(plt_pts, fig_size)
All in one plot of all samples
- all_plot_obo(plt_pts, fig_size)
One by one plot of all samples
- calculate_plot_day()
- daterange(start_date, end_date)
- day_plot_aio(plt_pts, fig_size)
All in one plot for one day
- day_plot_obo(plt_pts, fig_size)
One by one plot for one day
- property get_checks
- plot(plot_option, plt_pts=None, fig_size=(6.4, 4.8))
Default plot function for showing results
- points = ['OA_min_sys', 'OA_timestep', 'Heat_sys_out']
- result = Empty DataFrame Columns: [] Index: []
- save_data(csv_path)
- class checklib.EconomizerIntegrationCompliance(df: DataFrame, params=None, results_folder=None)[source]
Bases:
RuleCheckBase
- add_md(md_file_path, img_folder, relative_path_to_img_in_md, item_dict, plot_option=None, fig_size=(6.4, 4.8))
- all_plot_aio(plt_pts, fig_size)
All in one plot of all samples
- all_plot_obo(plt_pts, fig_size)
One by one plot of all samples
- calculate_plot_day()
- daterange(start_date, end_date)
- day_plot_aio(plt_pts, fig_size)
All in one plot for one day
- day_plot_obo(plt_pts, fig_size)
One by one plot for one day
- property get_checks
- plot(plot_option, plt_pts=None, fig_size=(6.4, 4.8))
Default plot function for showing results
- points = ['OA_min_sys', 'OA_timestep', 'Cool_sys_out']
- result = Empty DataFrame Columns: [] Index: []
- save_data(csv_path)
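A minimal usage sketch for one of the concrete items above (EconomizerIntegrationCompliance). The CSV file name, the assumption that the input DataFrame columns are named after the item's points list, and the plot_option value are illustrative rather than prescribed by the API:

```python
import pandas as pd

from checklib import EconomizerIntegrationCompliance

# Hypothetical time series; columns are assumed to match the item's
# `points` list ('OA_min_sys', 'OA_timestep', 'Cool_sys_out').
df = pd.read_csv("economizer_timeseries.csv", index_col=0, parse_dates=True)

item = EconomizerIntegrationCompliance(df, results_folder="./results")
print(item.get_checks)                 # summary of the verification checks
item.plot(plot_option="all-compact")   # plot_option value borrowed from the Verification API (assumption)
item.save_data("./results/economizer_integration_flags.csv")
```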
- class checklib.HeatRecoveryCompliance(df: DataFrame, params=None, results_folder=None)[source]
Bases:
RuleCheckBase
- add_md(md_file_path, img_folder, relative_path_to_img_in_md, item_dict, plot_option=None, fig_size=(6.4, 4.8))
- all_plot_aio(plt_pts, fig_size)
All in one plot of all samples
- all_plot_obo(plt_pts, fig_size)
One by one plot of all samples
- calculate_plot_day()
- daterange(start_date, end_date)
- day_plot_aio(plt_pts, fig_size)
All in one plot for one day
- day_plot_obo(plt_pts, fig_size)
One by one plot for one day
- property get_checks
- plot(plot_option, plt_pts=None, fig_size=(6.4, 4.8))
Default plot function for showing results
- points = ['OA_timestep', 'Heat_rec', 'Cool_rec', 'OA_min_sys']
- result = Empty DataFrame Columns: [] Index: []
- save_data(csv_path)
- class checklib.HumidityWithinBoundaries(df: DataFrame, params=None, results_folder=None)[source]
Bases:
RuleCheckBase
- add_md(md_file_path, img_folder, relative_path_to_img_in_md, item_dict, plot_option=None, fig_size=(6.4, 4.8))
- all_plot_aio(plt_pts, fig_size)
All in one plot of all samples
- all_plot_obo(plt_pts, fig_size)
One by one plot of all samples
- calculate_plot_day()
- daterange(start_date, end_date)
- day_plot_aio(plt_pts, fig_size)
All in one plot for one day
- day_plot_obo(plt_pts, fig_size)
One by one plot for one day
- property get_checks
- plot(plot_option, plt_pts=None, fig_size=(6.4, 4.8))
Default plot function for showing results
- points = ['Zone_hum', 'Hum_up_bound', 'Hum_low_bound']
- result = Empty DataFrame Columns: [] Index: []
- save_data(csv_path)
- class checklib.RuleCheckBase(df: DataFrame, params=None, results_folder=None)[source]
Bases:
CheckLibBase
- add_md(md_file_path, img_folder, relative_path_to_img_in_md, item_dict, plot_option=None, fig_size=(6.4, 4.8))
- all_plot_aio(plt_pts, fig_size)
All in one plot of all samples
- all_plot_obo(plt_pts, fig_size)
One by one plot of all samples
- calculate_plot_day()
- daterange(start_date, end_date)
- day_plot_aio(plt_pts, fig_size)
All in one plot for one day
- day_plot_obo(plt_pts, fig_size)
One by one plot for one day
- property get_checks
- plot(plot_option, plt_pts=None, fig_size=(6.4, 4.8))
Default plot function for showing results
- points = None
- result = Empty DataFrame Columns: [] Index: []
- save_data(csv_path)
- abstract verify()
Checking logic implementation; not intended to be called directly by users
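A minimal sketch of a new rule-style verification item built on RuleCheckBase, assuming that subclasses declare a points list and implement the abstract verify() method. The item name, point names, threshold, the self.df attribute, and the convention of storing flags on self.result are illustrative assumptions, not the library's prescribed pattern:

```python
from checklib import RuleCheckBase

class SupplyAirTempResetCompliance(RuleCheckBase):
    """Hypothetical example item; names and logic are illustrative only."""

    points = ["T_sa_set", "T_oa"]  # datapoints this check consumes (assumed names)

    def verify(self):
        # Checking logic implementation (not called directly by users).
        # `self.df` and storing the per-timestep pass/fail flags on
        # `self.result` are assumptions for this sketch; see existing
        # items in library.py for the exact convention.
        self.result = self.df["T_sa_set"] >= (self.df["T_oa"] - 20.0)
```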
- class checklib.SimultaneousHeatingCoolingCompliance(df: DataFrame, params=None, results_folder=None)[source]
Bases:
RuleCheckBase
- add_md(md_file_path, img_folder, relative_path_to_img_in_md, item_dict, plot_option=None, fig_size=(6.4, 4.8))
- all_plot_aio(plt_pts, fig_size)
All in one plot of all samples
- all_plot_obo(plt_pts, fig_size)
One by one plot of all samples
- calculate_plot_day()
- daterange(start_date, end_date)
- day_plot_aio(plt_pts, fig_size)
All in one plot for one day
- day_plot_obo(plt_pts, fig_size)
One by one plot for one day
- property get_checks
- plot(plot_option, plt_pts=None, fig_size=(6.4, 4.8))
Default plot function for showing results
- points = ['Cool_sys_out', 'Heat_sys_out']
- result = Empty DataFrame Columns: [] Index: []
- save_data(csv_path)
- class api.brick_compliance.BrickCompliance(brick_schema_path: str, brick_instance_path: str, query_statement_path: str = './resources/brick/query_statement.yml', datapoint_name_conversion_path: str = './resources/brick/verification_datapoint_info.yml', perform_reasoning: bool = False)[source]
Bases:
object
Instantiate a BrickCompliance class object and load specified brick schema and brick instance.
- Args:
- brick_schema_path: str
brick schema path (e.g., “../schema/Brick.ttl”)
- brick_instance_path: str
brick instance path (e.g., “../schema/brick_testing.ttl”)
- query_statement_path: str
the query statements file path. The default path is ./resources/brick/query_statement.yml.
- datapoint_name_conversion_path: str
the YAML file that stores datapoint name conversion information. The default path is ./resources/brick/verification_datapoint_info.yml.
- perform_reasoning: bool
whether reasoning is performed on the given instance. The default value is False.
- get_applicable_verification_lib_items(verification_lib_item_list: list | None = None)[source]
Get applicable control verification library items, among those listed in verification_lib_item_list, from the brick instance.
- Args:
- verification_lib_item_list: list
list of verification item names to be tested. If an empty list is provided, all available verification library items are tested.
- Returns: list
list that includes available verification library item names from the given brick instance.
- query_verification_case_datapoints(verification_item_lib_name: str, energyplus_naming_assembly: bool = True, default_verification_case_values: dict | None = None) list [source]
Query data point(s) for the given verification case.
- Args:
- verification_item_lib_name: str
verification item name(s) that will be queried
- energyplus_naming_assembly: bool
whether the queried datapoints are converted to EnergyPlus-style variable names
- default_verification_case_values: dict
default key values. The keys ("no", "run_simulation", "idf", "idd", "weather", "output", "ep_path", "expected_result", "parameters") must exist.
- Returns:
- self.queried_datapoint_all_dict: dict
dictionary that includes verification item as a key and queried data point list as a value
- query_with_customized_statement(custom_query_statement: str, verification_item_lib_name: str, energyplus_naming_assembly: bool = True, default_verification_case_values: dict | None = None) list [source]
Query datapoints with a customized query statement. A quality check of the query_statement is performed by checking whether the number of queried variables is the same as the required number of data points in the verification library item.
- Args:
- custom_query_statement: str
query statement written by the user
- verification_item_lib_name: str
verification library item of the query_statement
- energyplus_naming_assembly: bool
whether to convert the queried datapoints' names to EnergyPlus-style variable names.
- default_verification_case_values: dict
default key values. The keys ("no", "run_simulation", "idf", "idd", "weather", "output", "ep_path", "expected_result", "parameters") must exist.
- Returns:
Queried result in the verification case format, and a str message from the query_statement's quality check.
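A short usage sketch for BrickCompliance based on the constructor and methods above; the schema and instance paths reuse the example values from the Args, and indexing into the returned list assumes at least one applicable item was found:

```python
from api.brick_compliance import BrickCompliance

bc = BrickCompliance(
    brick_schema_path="../schema/Brick.ttl",
    brick_instance_path="../schema/brick_testing.ttl",
)

# Identify which verification library items the instance can support,
# then query the data points needed for the first applicable item.
applicable_items = bc.get_applicable_verification_lib_items()
cases = bc.query_verification_case_datapoints(
    applicable_items[0], energyplus_naming_assembly=True
)
```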
data_processing.py
Data Processing API
- class api.data_processing.DataProcessing(data_path: str | None = None, data_source: str | None = None, timestamp_column_name: str | None = None)[source]
Bases:
object
Instantiate a data processing object to load datasets and manipulate data before feeding it to the verification process.
- Args:
data_path (str): Path to the data (CSV format) to be loaded for processing.
data_source (str): Data source name. Use EnergyPlus or Other.
timestamp_column_name (str): Name of the column header that contains the time series timestamps.
- add_parameter(name: str | None = None, value: float | None = None, inplace: bool = False) None | DataFrame [source]
Add a parameter to data. The parameter will be added as a constant value for all index of data.
- Args:
name (str): Name of the parameter.
value (float): Value of the parameter.
inplace (bool, optional): Modify the dataset directly. Defaults to False.
- Returns:
pd.DataFrame: Modified dataset
- apply_function(variable_names: list | None = None, new_variable_name: str | None = None, function_to_apply: str | None = None, inplace: bool = False) None | DataFrame [source]
Apply an aggregation function to a list of variables from the dataset.
- Args:
variable_names (list): List of variables used as input to the function. All elements in variable_names need to be in self.data.columns.
new_variable_name (str): Name of the new variable containing the result of the function for each timestamp.
function_to_apply (str): Name of the function to apply. Choices are: 'sum', 'min', 'max', or 'average' (or 'mean').
inplace (bool, optional): Modify the dataset directly. Defaults to False.
- Returns:
pd.DataFrame: Modified dataset
- check() dict [source]
Perform a sanity check on the data.
- Returns:
Dict: Dictionary showing the number of missing values for each variable as well as the outliers.
- concatenate(datasets: list | None = None, axis: int | None = None, inplace: bool = False) None | DataFrame [source]
Concatenate datasets. Duplicated columns (for horizontal concatenation) or rows (for vertical concatenation) are kept. Column names (for vertical concatenation) or indexes (for horizontal concatenation) need to match exactly.
- Args:
datasets (list): List of datasets (pd.DataFrame) to concatenate with data.
axis (int): 1 or 0. 1 performs a vertical concatenation and 0 performs a horizontal concatenation.
inplace (bool, optional): Modify the dataset directly. Defaults to False.
- Returns:
pd.DataFrame: Modified dataset
- downsample(frequency_type: str | None = None, number_of_periods: int | None = None, sampling_function: dict | str | None = None, inplace: bool = False) None | DataFrame [source]
Downsample data
- Args:
frequency_type (str): Downsampling frequency. Either 'day', 'hour', 'minute', or 'second'.
number_of_periods (int): Number of periods of the frequency used for downsampling. For instance, use 1 with a frequency_type of 'hour' to downsample the data to every hour.
sampling_function (Union[dict, str], optional): Function to apply during downsampling, either 'mean' or 'sum', or a dictionary of key/value pairs where the keys correspond to all the variables in data and the values are either 'mean' or 'sum'. By default, 'mean' is used to downsample.
inplace (bool, optional): Modify the dataset directly. Defaults to False.
- Returns:
pd.DataFrame: Modified dataset
- fill_missing_values(method: str | None = None, variable_names: list = [], inplace: bool = False) None | DataFrame [source]
Fill missing values (NaN) in data.
- Args:
method (str): Method to use to fill the missing values: 'linear' (treat values as equally spaced) or 'pad' (use existing values).
variable_names (list, optional): List of variable names that need missing values to be filled. By default, fill all missing data in self.data.
inplace (bool, optional): Modify the dataset directly. Defaults to False.
- Returns:
pd.DataFrame: Modified dataset
- plot(variable_names: list | None = None, kind: str | None = None) Axes | None [source]
Create plots of time series data, or a scatter plot between two variables
- Args:
variable_names (list): List of variables to plot. The variables must be in the data.
kind (str): Type of chart to plot, either 'timeseries' or 'scatter'. If 'timeseries' is used, all variable names provided in variable_names will be plotted against the index timestamp from data. If 'scatter' is used, the first variable provided in the list will be used as the x-axis and the others will be on the y-axis.
- Returns:
matplotlib.axes.Axes: Matplotlib axes object
- slice(start_time: datetime, end_time: datetime, inplace: bool = False) None | DataFrame [source]
Discard any data before start_time and after end_time.
- Args:
start_time (datetime): Python datetime object used as the slice start date of the data.
end_time (datetime): Python datetime object used as the slice end date of the data.
inplace (bool, optional): Modify the dataset directly. Defaults to False.
- Returns:
pd.DataFrame: Modified dataset
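A typical pre-processing sequence with DataProcessing, assuming an EnergyPlus CSV output; the file path, timestamp column name, and date range are illustrative:

```python
from datetime import datetime

from api.data_processing import DataProcessing

dp = DataProcessing(
    data_path="./results/eplusout.csv",    # illustrative path
    data_source="EnergyPlus",
    timestamp_column_name="Date/Time",      # assumed column header
)

dp.fill_missing_values(method="linear", inplace=True)
dp.downsample(frequency_type="hour", number_of_periods=1,
              sampling_function="mean", inplace=True)
dp.slice(start_time=datetime(2018, 7, 1),
         end_time=datetime(2018, 7, 31), inplace=True)
print(dp.check())  # number of missing values and outliers per variable
```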
reporting.py
Reporting API
- class api.reporting.Reporting(verification_json: str | None = None, result_md_name: str | None = None, report_format: str = 'markdown')[source]
Bases:
object
- Args:
verification_json (str): Path to the result json files after verifications to be loaded for reporting. It can be one JSON file or a wildcard for multiple JSON files (e.g., *_md.json).
result_md_name (str): Name of the report summary markdown to be saved. All md reports will be created in the same directory as the verification result json files.
report_format (str): File format to be output. For now, only markdown format is available. More formats (e.g., html, pdf, csv, etc.) will be added in future releases.
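A minimal instantiation sketch for Reporting; the wildcard path and markdown file name are illustrative:

```python
from api.reporting import Reporting

report = Reporting(
    verification_json="./results/*_md.json",  # one file or a wildcard
    result_md_name="verification_summary.md",
    report_format="markdown",                 # only markdown is currently supported
)
```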
verification_case.py
Verification Case API
- class api.verification_case.VerificationCase(cases: List | None = None, json_case_path: str | None = None)[source]
Bases:
object
Instantiate a verification case class object and load verification case(s) into self.case_suite as a Dict. Keys are automatically generated unique ids of the cases, and values are the fully defined verification case Dicts. If any argument is invalid, the object instantiation will report an error message.
- Args:
cases: (optional) List of Dict. Dictionaries that include verification case(s).
json_case_path: (optional) str. Path to the verification case file. If the path ends with *.json, then the items in the JSON file are loaded. If the path points to a directory, then all verification case JSON files in the directory are loaded.
- static create_verification_case_suite_from_base_case(base_case: Dict | None = None, update_key_value: Dict | None = None, keep_base_case: bool = False) List[Dict] | None [source]
Create multiple slightly different verification cases by changing keys and values as specified in update_key_value. If keep_base_case is set to True, the base_case is added as the first element in the returned list.
- Args:
base_case: Dict. Base verification input information.
update_key_value: Dict. The same format as the base_case arg, but the updating fields consist of a list of values to be populated with.
keep_base_case: (optional) bool. Whether to keep the base case in the returned list of verification cases. Defaults to False.
- Returns:
List: A list of Dict; each Dict is a generated case from the base case.
- load_verification_cases_from_json(json_case_path: str | None = None) List[str] | None [source]
Add verification cases from specified json file into self.case_suite. Cases that have already been loaded are ignored.
- Args:
json_case_path: str, path to the json file containing fully defined verification cases.
- Returns:
List, unique ids of verification cases loaded in self.case_suite
- save_case_suite_to_json(json_path: str | None = None, case_ids: List = []) None [source]
Save verification cases to a dedicated file. If the case_ids argument is empty, all the cases in self.case_suite are saved. If case_ids includes specific cases' hashes, only the cases with those hashes are saved.
- Args:
json_path: str. Path to the json file to save the cases.
case_ids: (optional) List. Unique ids of verification cases to save. By default, save all cases in self.case_suite. Defaults to an empty list.
- static save_verification_cases_to_json(json_path: str | None = None, cases: list | None = None) None [source]
Save verification cases to a dedicated file. The cases list consists of verification case dicts.
- Args:
json_path: str. json file path to save the cases.
cases: List. List of complete verification case Dictionaries to save.
- validate()[source]
Validate all verification cases in self.case_suite with validation logic in VerificationCase.validate_verification_case_structure()
- static validate_verification_case_structure(case: Dict | None = None, verbose: bool = False) bool [source]
Validate the verification case structure (e.g., check whether run_simulation, simulation_IO, etc. exist). Check whether required key/value pairs exist in the case and whether the datatypes of values are appropriate (e.g., a file path is a str).
- Args:
case: Dict. Case information that will be validated.
verbose: bool. Whether to output verbose information. Defaults to False.
- Returns:
Bool, indicating whether the case structure is valid or not.
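A usage sketch for VerificationCase, assuming a fully defined case JSON file exists; the file paths and the choice of expected_result as the field to vary are illustrative:

```python
from api.verification_case import VerificationCase

vc = VerificationCase(json_case_path="./cases/verification_cases.json")

# Expand one loaded case into two variants and persist them.
base = list(vc.case_suite.values())[0]
variants = VerificationCase.create_verification_case_suite_from_base_case(
    base_case=base,
    update_key_value={"expected_result": ["pass", "fail"]},  # illustrative field to vary
    keep_base_case=True,
)
VerificationCase.save_verification_cases_to_json(
    json_path="./cases/expanded_cases.json", cases=variants
)
```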
verification_library.py
Verification Library API
- class api.verification_library.VerificationLibrary(lib_path: str | None = None)[source]
Bases:
object
Instantiate a verification library class object and load specified library items as self.lib_items.
- Args:
lib_path (str, optional): path to the verification library file or folder. If the path ends with *.json, then library items defined in the json file are loaded. If the path points to a directory, then library items in all jsons in this directory and its subdirectories are loaded. Library items need to have unique names defined in the json files and python files. Defaults to None.
- get_applicable_library_items_by_datapoints(datapoints: List[str] = []) Dict [source]
Based on provided datapoints lists, identify potentially applicable library items from all loaded items. Use this function with caution as it 1) requires aligned data points naming across all library items; 2) does not check the topological relationships between datapoints.
- Args:
datapoints: list of str datapoint names.
- Returns:
Dict with keys being the library item names and values being the required datapoints for the corresponding keys.
- get_library_item(item_name: str) Dict [source]
Get the json definition and meta information of a specific library item.
- Args:
item_name (str): Verification item name to get.
- Returns:
- Dict: Library item information with four specific keys:
library_item_name: unique str name of the library item.
library_json: library item json definition in the library json file.
library_json_path: path of the library json file that contains this library item.
library_python_path: path of the python file that contains the python implementation of this library item.
- get_library_items(items: List[str] = []) List | None [source]
Get the json definition and meta information of a list of specific library items.
- Args:
items: list of str, default []. Library items to get. By default, get all library items loaded at instantiation.
- Returns:
- list of Dict with four specific keys:
library_item_name: unique str name of the library item.
library_json: library item json definition in the library json file.
library_json_path: path of the library json file that contains this library item.
library_python_path: path of the python file that contains the python implementation of this library item.
- get_required_datapoints_by_library_items(datapoints: List[str] = []) Dict | None [source]
Summarize datapoints that need to be used to support specified library items. Use this function with caution as it 1) requires aligned data points naming across all library items; 2) does not check the topological relationships between datapoints.
- Args:
items: list of str, default []. Library items to summarize datapoints from. By default, summarize all library items loaded at instantiation.
- Returns:
- Dict with keys being the datapoint name and values being a sub Dict with the following keys:
number_of_items_using_this_datapoint: int, number of library items that use this datapoint.
library_items_list: List of library item names that use this datapoint.
- validate_library(items: List[str] | None = None) Dict [source]
Check the validity of library items definition. This validity check includes checking the completeness of json specification (against library json schema) and Python verification class definition (against library class interface) and the match between the json and python implementation.
- Args:
items: list of str, default None. Library items to validate. items must be filled with valid verification item(s); otherwise, an error occurs.
- Returns:
Dict that contains validity information of library items.
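A usage sketch for VerificationLibrary; the library path is illustrative, and the datapoint names reuse the points documented for EconomizerIntegrationCompliance above:

```python
from api.verification_library import VerificationLibrary

vl = VerificationLibrary(lib_path="./schema/library.json")  # illustrative path

# Which loaded items could run with these datapoints?
print(vl.get_applicable_library_items_by_datapoints(
    ["OA_min_sys", "OA_timestep", "Cool_sys_out"]
))

# JSON definition and file locations of a single item.
item = vl.get_library_item("EconomizerIntegrationCompliance")
print(item["library_json_path"], item["library_python_path"])
```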
verification.py
Verification API
- class api.verification.Verification(verifications: VerificationCase | None = None)[source]
Bases:
object
- configure(output_path: str | None = None, time_series_csv_export_name: str | None = None, lib_items_path: str | None = None, lib_classes_py_file: str | None = None, plot_option: str | None = None, fig_size: tuple = (6.4, 4.8), num_threads: int = 1, preprocessed_data: DataFrame | None = None) None [source]
Configure verification environment.
- Args:
output_path (str): Verification results output path.
time_series_csv_export_name (str, optional): CSV file name for saving a complete data csv file with verification result flags. Defaults to None, which will not save any time series data archives.
lib_items_path (str, optional): User provided verification item json path (including the name of the file with extension).
lib_classes_py_file (str, optional): User provided verification item python classes file.
plot_option (str, optional): Type of plots to include. It should be either all-compact, all-expand, day-compact, or day-expand. It can also be None, which will plot all types. Defaults to None.
fig_size (tuple, optional): Tuple of integers (length, height) describing the size of the figure to plot. Defaults to (6.4, 4.8).
num_threads (int, optional): Number of threads used to run verifications in parallel. Defaults to 1.
preprocessed_data (pd.DataFrame, optional): Pre-processed data stored in a data frame. Defaults to None.
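A configuration sketch for Verification, assuming a case suite has already been loaded with VerificationCase; the paths are illustrative, and the method that actually executes the configured verifications is not listed in this section, so it appears only as a commented-out assumption:

```python
from api.verification import Verification
from api.verification_case import VerificationCase

cases = VerificationCase(json_case_path="./cases/verification_cases.json")

verif = Verification(verifications=cases)
verif.configure(
    output_path="./results",
    lib_items_path="./schema/library.json",
    plot_option="all-compact",
    fig_size=(6.4, 4.8),
    num_threads=2,
)
# verif.run()  # assumed entry point for executing the verifications; not documented above
```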
workflow.py
Workflow API
- class api.workflow.Choice(state_dict, payloads)[source]
Bases:
object
The Choice state checks conditions to decide the next step. A typical use case of executing a Choice state would be: next_state = Choice(state_dict, self.payloads).check_choices()
- class api.workflow.MethodCall(state_dict, payloads)[source]
Bases:
object
The MethodCall State class. This class also covers the Embedded MethodCall state type. A typical use case of executing a MethodCall state would be: self.payloads = MethodCall(state_dict, self.payloads).run().get_payloads()
- class api.workflow.Workflow(workflow: dict | str | None = None)[source]
Bases:
object
Instantiate a Workflow class object and load specified workflow as a dict in self.workflow
- Args:
workflow (Union[str, dict], optional): str path to the workflow definition json file or dict of the actual workflow definition. Defaults to None.
- static create_workflow_engine(workflow: str | dict) None | WorkflowEngine [source]
Instantiate a WorkflowEngine object with specified workflow definition.
- Args:
workflow (Union[str, dict]): str path to the workflow definition json file or dict of the actual workflow definition.
- Returns:
Union[None, WorkflowEngine]: Instantiated WorkflowEngine object if provided workflow is valid; None otherwise.
- static list_existing_workflows(workflow_dir: str | None = None) dict | None [source]
List existing workflows (defined as json files) under a specific directory path.
- Args:
workflow_dir (str, optional): path to the directory containing workflow definitions (including subdirectories). By default, points to the path of the example folder.
- Returns:
- Union[dict, None]: dict with keys being workflow names and values being a Dict with the following keys:
workflow_json_path: path to the file of the workflow
workflow: Dict of the workflow, loaded from the workflow json definition
- load_workflow(workflow: str | dict) None [source]
Load workflow definition from a json file or dict to self.workflow.
- Args:
workflow (Union[str, dict]): str path to the workflow definition json file or dict of the actual workflow definition.
- run_workflow(verbose: bool = False) bool [source]
Execute the workflow defined in self.workflow
- Args:
verbose (bool, optional): Whether to output detailed information. Defaults to False.
- Returns:
bool: Whether the run is successful or not.
- save(json_path: str | None = None) None [source]
Save the workflow as a json file.
- Args:
json_path (str, optional): path to the file to be saved. Defaults to None.
- validate(verbose=False) dict [source]
Validate self.workflow
- Args:
verbose (bool, optional): Verbose output for validate. Defaults to False.
- Returns:
- dict: dict with the following keys:
workflow_validity: bool flag of the validity of the workflow definition
detail: detailed info about the validity check.
- static validate_workflow_definition(workflow: str | dict, verbose=False) dict [source]
Validate a workflow definition.
- Args:
workflow (Union[str, dict]): If str, this is assumed to be the path to the workflow definition json file; if dict, this is assumed to be loaded from the workflow json definition.
verbose (bool, optional): Verbose output for validate. Defaults to False.
- Returns:
- dict: dict with the following keys:
workflow_validity: bool flag of the validity of the workflow definition
detail: detailed info about the validity check.
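A usage sketch for Workflow, based on the constructor, validate(), and run_workflow() documented above; the workflow JSON path is illustrative:

```python
from api.workflow import Workflow

wf = Workflow(workflow="./examples/demo_workflow.json")  # illustrative path

validity = wf.validate(verbose=True)
if validity["workflow_validity"]:
    success = wf.run_workflow(verbose=True)
    print("workflow run successful:", success)
else:
    print(validity["detail"])
```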
- class api.workflow.WorkflowEngine(workflow, run_workflow_now=False)[source]
Bases:
object
- import_package() None [source]
Import third party packages based on the “imports” element values of the workflow json. E.g.: { … “imports”: [“numpy as np”,”pandas as pd”,”datetime”], … }
- load_workflow_json(workflow_path: str) None [source]
Load workflow from a json workflow definition.
- Args:
workflow_path (str): path to the workflow json file.
- run_state(state_name: str) None | str [source]
Run a specific state by state_name. This is not an external-facing method and is only supposed to be called by run_workflow.
- Args:
state_name (str): name of the state to execute.
- Returns:
Union[None, str]: name of the next state to run or None if there is no next state.
- run_workflow(verbose=True, max_states: int = 1000) None [source]
Workflow runner with a setting for the maximum number of states allowed.
- Args:
verbose (bool, optional): Whether to output detailed information. Defaults to True.
max_states (int, optional): Maximum number of states allowed to run. Defaults to 1000.
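Alternatively, a WorkflowEngine can be created directly through Workflow.create_workflow_engine and run with a custom state limit; the workflow path is illustrative:

```python
from api.workflow import Workflow

engine = Workflow.create_workflow_engine("./examples/demo_workflow.json")
if engine is not None:  # None means the workflow definition was invalid
    engine.run_workflow(verbose=True, max_states=500)
```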