Reducing the size of the Kaggle dataset

I will be downloading a large dataset, 2 million rows, from Kaggle.
I want to filter this CSV file down to 200k rows so it is quicker to process in my notebook, which I run on my local system.
Can I download my filtered CSV, along with my notebook, to Jovian so that the notebook will pick up my filtered CSV when you evaluate my submission?

You can do the project on Kaggle; there won’t be any need to download anything (and your system won’t choke on such a large amount of data).

Also, you can commit directly from Kaggle to Jovian, so even if you do filter the data, you can work there and do whatever you need to.

Not sure I understand the intention here (you probably meant “upload”, but I still have no idea what you want to achieve).

Sorry Sebastian, after reading my message again, it does seem a bit confusing!
What I want to do is upload a CSV file to Jovian to accompany my notebook.
I have downloaded a large dataset from Kaggle and, using a Python script, reduced the number of rows to a manageable size. Without my CSV file accompanying my notebook, the notebook will not run.
So can I upload my CSV file as well as my notebook to Jovian?
Alternatively, I can include my Python script in my notebook so that the full Kaggle dataset can be filtered in situ, as in the sketch below.
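
Something along these lines (the filenames and the random-sampling step are placeholders, not my actual script):

```python
import pandas as pd

# Placeholder filename for the full Kaggle download
df = pd.read_csv("full_kaggle_dataset.csv")

# Keep a random 200k-row sample so the notebook runs quickly locally
df_small = df.sample(n=200_000, random_state=42)

# Placeholder filename for the reduced file the notebook reads
df_small.to_csv("filtered_dataset.csv", index=False)
```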

You can just commit() the notebook, and all the cells and outputs will be uploaded. There’s no need to have the dataset uploaded as well.
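
For example, from inside the notebook (assuming the jovian package is installed):

```python
import jovian

# Captures the current notebook, including all cells and outputs
jovian.commit()
```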

If your aim, though, is to upload it (to make it possible for anyone seeing it to run it immediately without downloading anything), you could use the files argument of the commit() function.
This argument accepts a list of files you wish to upload. I have no idea, though, whether this will work with really big files (there may be limits imposed by the server).
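
For example (the filename here is just a placeholder):

```python
import jovian

# "filtered_dataset.csv" stands in for whatever your file is called
jovian.commit(files=["filtered_dataset.csv"])
```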

Thanks Sebastian, I’ll try to upload my CSV via the commit method.
Is the commit method documented? There may be other hidden gems that we could use.
Found the docco. I was just being lazy!