Lecture Video will be available on the course page

Topics Covered:

Downloading & processing Kaggle datasets

Training a logistic regression model

Model evaluation, prediction & persistence

Notebooks used in this lesson:

Please provide your valuable feedback on this link to help us improve the course experience.

Join the Jovian Discord Server to interact with the course team, share resources, and attend the study hours.

Asking/Answering Questions

Reply to this thread to ask questions. Before asking, scroll through the thread and check if your question (or a similar one) is already present. If yes, just like it. We will give priority to the questions with the most likes. The rest will be answered by our mentors or the community. If you see a question you know the answer to, please post your answer as a reply to that question. Let's help each other learn!

@aakashns Sir, I built a logistic regression model on the breast cancer dataset. The model accuracy came out to be 91%, but when I tested this model's performance against the random_guess model and the all_no model, their accuracies came out to be 54% and 36% respectively. Is this expected, or is something wrong?
Also, I was testing the model by providing a new single input at the end for prediction, and I was scaling it the same way I did for the training and test sets, but the new single input was not getting scaled. I am using the same scaler object. I wonder why.
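For context on the scaling part of the question: a common reason a single new input is "not getting scaled" is that it is passed as a 1D array, while scikit-learn transformers expect a 2D array (one row per sample). A minimal sketch, assuming a `StandardScaler` fitted on the training data (the feature values here are made up for illustration):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical training features (2 columns)
X_train = np.array([[1.0, 200.0],
                    [2.0, 300.0],
                    [3.0, 400.0]])

scaler = StandardScaler().fit(X_train)

# A single new input must be reshaped to 2D (1 row, n columns)
# before calling transform; a plain 1D array raises an error.
new_input = np.array([2.0, 300.0])
scaled = scaler.transform(new_input.reshape(1, -1))

# This input equals the column means, so it scales to zeros
print(scaled)
```

If the input comes from a pandas DataFrame row, `df.loc[[i]]` (double brackets) keeps it 2D, whereas `df.loc[i]` returns a 1D Series.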

I don't remember exactly, so I could be wrong, but I think the random_guess and all_no models were models that predicted a random guess and "no" respectively, for any input value.

They are basically reference baselines. For example, suppose the model we trained has an accuracy of 50%, and you then see that a random_guess model (a model that predicts with a random guess and requires no training/modeling, etc.) has an accuracy of 54%.
This would imply that our 50%-accurate model is not very successful and requires changes.
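The baseline idea described above can be sketched as follows. The labels here are made-up toy data, and the function names `random_guess` and `all_no` mirror the ones mentioned in the thread (the course notebook's exact implementation may differ):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy binary targets: 1 = "yes", 0 = "no" (hypothetical labels)
y_true = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])

def random_guess(y):
    # Baseline 1: predict 0 or 1 uniformly at random for every input
    return rng.integers(0, 2, size=len(y))

def all_no(y):
    # Baseline 2: always predict the "no" class (0), no training needed
    return np.zeros(len(y), dtype=int)

def accuracy(y_true, y_pred):
    # Fraction of predictions that match the true labels
    return np.mean(y_true == y_pred)

# A trained model is only useful if it clearly beats these baselines
print("all_no accuracy:      ", accuracy(y_true, all_no(y_true)))
print("random_guess accuracy:", accuracy(y_true, random_guess(y_true)))
```

On this toy data, `all_no` scores 70% simply because 7 of the 10 labels are 0, which is why a trained model's accuracy should always be compared against such baselines rather than judged in isolation.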

In case I'm remembering wrong and random_guess and all_no are test cases instead, then you would have to check your model for overfitting.