Grid vs. Random Search Explained

The document discusses hyperparameter tuning in machine learning, focusing on two techniques: Grid Search and Random Search. Grid Search exhaustively evaluates all combinations of predefined hyperparameters, while Random Search samples a broader range more quickly, making it suitable for models with many parameters. The choice between the two methods depends on the number of hyperparameters and the need for thoroughness versus efficiency.

Grid Search vs. Random Search

LONDON INNOVATION ACADEMY


1. What is Hyperparameter Tuning?
In machine learning, models often have parameters that need to be set before training begins, called hyperparameters (like learning rate, number of
trees, depth of a decision tree, etc.). The process of choosing the best values for these hyperparameters is called hyperparameter tuning.
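For instance, a minimal sketch using scikit-learn (the specific values here are illustrative, not from the original text) shows that hyperparameters are passed in before any training happens:

```python
from sklearn.tree import DecisionTreeClassifier

# Hyperparameters are chosen by the practitioner before training;
# they are not learned from the data the way model weights are.
model = DecisionTreeClassifier(max_depth=5, min_samples_split=4)
print(model.get_params()['max_depth'])  # 5
```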

Grid Search vs. Random Search


Grid Search
• Definition: Grid Search is an exhaustive search technique that evaluates every possible combination of hyperparameters from a predefined set.
• How It Works:
1. You define a grid, or "list," of hyperparameter values that you want to test.
2. Grid Search will then try every combination of these hyperparameters, train the model for each, and measure its performance.
3. It picks the combination that results in the best performance (usually on a validation set).
• Example: Suppose we are tuning a Decision Tree and want to test different values for:
o max_depth : [3, 5, 7]
o min_samples_split : [2, 4]

Grid Search would test all 3 x 2 = 6 combinations:
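Those six combinations can be enumerated directly; a minimal sketch using `itertools.product`:

```python
from itertools import product

max_depth = [3, 5, 7]
min_samples_split = [2, 4]

# Every (max_depth, min_samples_split) pair Grid Search would evaluate
combinations = list(product(max_depth, min_samples_split))
for combo in combinations:
    print(combo)

print(len(combinations))  # 3 x 2 = 6
```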



from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Example data (the original snippet assumed X and y were already defined)
X, y = load_iris(return_X_y=True)

model = RandomForestClassifier(random_state=42)

param_grid = {
    'max_depth': [1, 4, 8],
    'min_samples_split': [3, 5, 9]
}

# Tries every combination in param_grid with 5-fold cross-validation
grid_search = GridSearchCV(estimator=model, param_grid=param_grid,
                           cv=5, scoring='accuracy')
grid_search.fit(X, y)

print("Best Parameters: ", grid_search.best_params_)
print("Best Accuracy: ", grid_search.best_score_)

Random Search
• Definition: Random Search samples a fixed number of random hyperparameter combinations from the specified ranges, rather than testing every combination.
• Pros:
o Efficient with large ranges: can explore a larger search space quickly without testing every combination.
o Good for complex models: for models with many hyperparameters, Random Search is often faster and may still find a good combination.
• Cons:
o Not exhaustive: it might miss the optimal combination, since it does not test every possibility.
o Random: its success depends partly on luck, as it relies on the randomly selected combinations.
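Random Search can be sketched with scikit-learn's RandomizedSearchCV; the model, the parameter ranges, and the n_iter budget below are illustrative choices, not from the original text:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

model = RandomForestClassifier(random_state=42)

# Ranges to sample from; only n_iter combinations are tried, not all 20
param_distributions = {
    'max_depth': [1, 2, 4, 8, 16],
    'min_samples_split': [2, 3, 5, 9],
}

random_search = RandomizedSearchCV(
    estimator=model,
    param_distributions=param_distributions,
    n_iter=5,           # number of random combinations to evaluate
    cv=5,
    scoring='accuracy',
    random_state=42,
)
random_search.fit(X, y)

print("Best Parameters: ", random_search.best_params_)
print("Best Accuracy: ", random_search.best_score_)
```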



Comparing Grid Search and Random Search

• Coverage: Grid Search is exhaustive and tests every combination; Random Search tests only a sample of them.
• Cost: Grid Search grows expensive as the number of hyperparameters and candidate values increases; Random Search can cover a broader range more quickly.
• Guarantees: Grid Search will find the best combination within the grid; Random Search may miss it, since its results depend partly on luck.

When to Use Which?


• Use Grid Search if you have a small, manageable number of hyperparameters to tune and need a thorough search.

• Use Random Search when you have many hyperparameters or possible values, or if training time is a concern, as it can explore a broader range of values more quickly.
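One way to make this trade-off concrete: Grid Search's cost is the product of the grid sizes, while Random Search's cost is a fixed budget you choose. A minimal sketch (the grid below is illustrative, not from the original text):

```python
from math import prod

# An illustrative grid with several hyperparameters
param_grid = {
    'max_depth': [3, 5, 7, 9],
    'min_samples_split': [2, 4, 8],
    'n_estimators': [100, 200, 400],
    'max_features': ['sqrt', 'log2'],
}

# Grid Search must fit one model per combination (per CV fold)
grid_trials = prod(len(values) for values in param_grid.values())

# Random Search fits however many combinations you budget for
random_trials = 20

print("Grid Search model fits:", grid_trials)    # 4 * 3 * 3 * 2 = 72
print("Random Search model fits:", random_trials)
```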

