plot_hyperparameters
====================

.. py:module:: plot_hyperparameters

.. autoapi-nested-parse::

   Hyperparameter analysis plotting module for ML results analysis. Focuses on
   visualizing the impact of hyperparameters on model performance.

Classes
-------

.. autoapisummary::

   plot_hyperparameters.HyperparameterAnalysisPlotter

Module Contents
---------------

.. py:class:: HyperparameterAnalysisPlotter(data: pandas.DataFrame)

   Analyzes and visualizes the impact of hyperparameters on model performance.

   This class extracts hyperparameter settings from model string representations
   in the results data, allowing for detailed analysis of how different
   hyperparameters affect a given performance metric.

   .. py:attribute:: data

   .. py:attribute:: clean_data

   .. py:method:: get_available_algorithms()

      Gets a list of available, parsable algorithms from the data.

      :returns: A sorted list of unique algorithm names.
      :rtype: List[str]

   .. py:method:: plot_performance_by_hyperparameter(algorithm_name: str, hyperparameters: List[str], metric: str = 'auc', figsize: Optional[Tuple[int, int]] = None)

      Plots performance against a list of hyperparameters in a grid.

      This method provides a visual analysis of how individual parameter values
      affect the model's metric score. It creates a grid of subplots, where each
      subplot visualizes the relationship between a specific hyperparameter and
      the performance metric, automatically choosing a scatter plot (for
      continuous values) or a box plot (for categorical/discrete values).

      :param algorithm_name: The name of the algorithm to analyze (e.g., 'RandomForestClassifier').
      :type algorithm_name: str
      :param hyperparameters: A list of hyperparameter names to plot.
      :type hyperparameters: List[str]
      :param metric: The performance metric for the y-axis. Defaults to 'auc'.
      :type metric: str, optional
      :param figsize: The overall figure size. If None, a default is calculated. Defaults to None.
      :type figsize: Optional[Tuple[int, int]], optional

   .. py:method:: plot_hyperparameter_importance(algorithm_name: str, metric: str = 'auc', top_n_percent: int = 20, figsize: Optional[Tuple[int, int]] = None)

      Plots hyperparameter distributions for top models vs. all models.

      This method provides insight into which hyperparameter values are more
      prevalent in high-performing models compared to the overall distribution
      of values explored during the search.

      :param algorithm_name: The name of the algorithm to analyze.
      :type algorithm_name: str
      :param metric: The metric used to define "top" models. Defaults to 'auc'.
      :type metric: str, optional
      :param top_n_percent: The percentage of top models to compare against. Defaults to 20.
      :type top_n_percent: int, optional
      :param figsize: The figure size for the plot. Defaults to None.
      :type figsize: Optional[Tuple[int, int]], optional

   .. py:method:: plot_hyperparameter_correlations(algorithm_name: str, metric: str = 'auc', method: str = 'pearson', figsize: Optional[Tuple[int, int]] = None, show_correlation_stats: bool = True)

      Plots correlations between continuous hyperparameters and a performance metric.

      This method creates scatter plots to visualize the relationship between
      each continuous hyperparameter and the target metric, including a
      regression line and correlation statistics.

      :param algorithm_name: The name of the algorithm to analyze.
      :type algorithm_name: str
      :param metric: The performance metric. Defaults to 'auc'.
      :type metric: str, optional
      :param method: The correlation method ('pearson' or 'spearman'). Defaults to 'pearson'.
      :type method: str, optional
      :param figsize: The figure size. Defaults to None.
      :type figsize: Optional[Tuple[int, int]], optional
      :param show_correlation_stats: Whether to print a summary table of correlations. Defaults to True.
      :type show_correlation_stats: bool, optional

   .. py:method:: plot_top_correlations(algorithm_name: str, metric: str = 'auc', method: str = 'pearson', top_n: int = 5, figsize: Tuple[int, int] = (15, 10))

      Plots only the top N hyperparameters most correlated with the metric.

      :param algorithm_name: The name of the algorithm to analyze.
      :type algorithm_name: str
      :param metric: The performance metric. Defaults to 'auc'.
      :type metric: str, optional
      :param method: The correlation method ('pearson' or 'spearman'). Defaults to 'pearson'.
      :type method: str, optional
      :param top_n: The number of top correlated hyperparameters to plot. Defaults to 5.
      :type top_n: int, optional
      :param figsize: The figure size. Defaults to (15, 10).
      :type figsize: Tuple[int, int], optional
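A minimal usage sketch of the API documented above. The column layout of the
results DataFrame (a ``model`` column of string representations and an ``auc``
score column) is an assumption for illustration only; this page states that
hyperparameters are parsed from model string representations but does not
specify the schema.

```python
import pandas as pd

# Toy results table. Column names 'model' and 'auc' are assumptions --
# the expected schema is not documented on this page.
results = pd.DataFrame({
    "model": [
        "RandomForestClassifier(max_depth=5, n_estimators=100)",
        "RandomForestClassifier(max_depth=10, n_estimators=300)",
        "LogisticRegression(C=1.0)",
    ],
    "auc": [0.81, 0.86, 0.78],
})

try:
    from plot_hyperparameters import HyperparameterAnalysisPlotter
except ImportError:  # module not on the path in this sketch
    HyperparameterAnalysisPlotter = None

if HyperparameterAnalysisPlotter is not None:
    plotter = HyperparameterAnalysisPlotter(results)

    # Sorted unique algorithm names parsed from the model strings.
    print(plotter.get_available_algorithms())

    # Grid of subplots: scatter for continuous hyperparameters,
    # box plot for categorical/discrete ones, vs. the 'auc' metric.
    plotter.plot_performance_by_hyperparameter(
        "RandomForestClassifier",
        hyperparameters=["n_estimators", "max_depth"],
        metric="auc",
    )

    # Hyperparameter value distributions among the top 20% of models
    # compared with all models explored during the search.
    plotter.plot_hyperparameter_importance(
        "RandomForestClassifier", top_n_percent=20
    )

    # Only the 3 hyperparameters most correlated (Spearman) with AUC.
    plotter.plot_top_correlations(
        "RandomForestClassifier", method="spearman", top_n=3
    )
```

The ``ImportError`` guard keeps the sketch self-contained; in a real analysis
session the import would be unconditional.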