What's the best way to tune hyperparameters collectively?

In the exercise, I try to find the best value for each hyperparameter individually.

What if there is a combination that is actually better than the individually tuned values?

For example: when I try to find the best value for `max_leaf_nodes`, I use the test_param_and_plot function, give it different values of `max_leaf_nodes`, and let's say I find that the best value is 180. But if I set `n_estimators` to 120 and then run the same experiment, what if I get a better result at `max_leaf_nodes=130`?

I tried doing this for a few combinations but didn't find any variation. Is there a way to find the optimum values for all parameters collectively?
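One way to check for interactions between parameters is to score every combination explicitly instead of tuning one parameter at a time. Below is a minimal sketch of that idea; the dataset is synthetic, the model and parameter values are assumptions for illustration, and the course's test_param_and_plot helper is replaced by a plain validation-MAE loop.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the course dataset.
X, y = make_regression(n_samples=400, n_features=8, noise=0.3, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Score every (n_estimators, max_leaf_nodes) pair, not each parameter alone.
results = {}
for n_estimators in [60, 120]:
    for max_leaf_nodes in [130, 180]:
        model = RandomForestRegressor(
            n_estimators=n_estimators,
            max_leaf_nodes=max_leaf_nodes,
            random_state=0,
        )
        model.fit(X_train, y_train)
        mae = mean_absolute_error(y_val, model.predict(X_val))
        results[(n_estimators, max_leaf_nodes)] = mae

# The pair with the lowest validation MAE wins.
best = min(results, key=results.get)
print("best (n_estimators, max_leaf_nodes):", best)
```

If the best `max_leaf_nodes` changes depending on `n_estimators`, the parameters interact and tuning them one at a time can miss the true optimum.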

For now, we have to manually try all the permutations and combinations to get the best result. Maybe later we'll see an implementation of either grid search CV or random search CV in the course.

Here, we have to select a range/set of parameters and let the computer figure out the best possible combination from it.

If you’re curious, you can check out: Grid Search | Random Search

Thanks. This looks interesting.

I tried both, but the results got worse, and it also took a long time to finish (especially grid search); maybe I made a mistake in the implementation.

Well, it's not necessarily the case that the score will increase after hyperparameter tuning. Maybe your chosen ML model is at its limit. You can look at a different model like XGBoost instead.

Generally, I try to explore and visualize every parameter individually, so I know which range to pick from. Then I choose a combined range for the different parameters, try it with random search, and then with grid search. If I'm still not getting good results, it's time to move to some other model.
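The coarse-then-fine workflow above can be sketched with `RandomizedSearchCV`: sample broad ranges first, then run a narrow grid search around the winner. The ranges below are assumptions for illustration, as is the synthetic dataset.

```python
from scipy.stats import randint
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

# Synthetic stand-in for the course dataset.
X, y = make_regression(n_samples=300, n_features=8, noise=0.3, random_state=0)

# Broad ranges chosen after eyeballing each parameter individually (assumed).
param_dist = {
    "n_estimators": randint(50, 200),
    "max_leaf_nodes": randint(50, 250),
}
search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_dist,
    n_iter=10,                        # only 10 sampled combinations, not the full grid
    scoring="neg_mean_absolute_error",
    cv=3,
    random_state=0,
)
search.fit(X, y)
# A follow-up GridSearchCV can then sweep a tight window around these values.
print(search.best_params_)
```

This keeps the expensive exhaustive search small: random search narrows the region, grid search refines it.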


The same has happened to me; the model tends to overfit the data marginally.