When measuring the classification accuracy on the training set, can I take a sample of it or should I use it entirely?

July 26, 2017 / Ask Slater

 
Usually inference is the fast part, so you can measure on the whole training set once per epoch. The problem with measuring on a sample is that, for most useful things (like measuring train/test divergence), it injects so much noise into the signal that it becomes much less useful. Whether sampling is viable depends more on the absolute size of your dataset than on any particular ratio. If you’ve got 100M examples, taking a class-balanced random sample of 10M is pretty reasonable. If you’ve got 10k examples, taking a sample of 1,000 will probably mess everything up.
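As a rough illustration of the class-balanced sampling mentioned above, here is a minimal sketch in Python. It assumes NumPy arrays and a scikit-learn-style classifier; the names (`class_balanced_sample`, `X_train`, `y_train`, `model`) are illustrative, not from the original answer.

```python
import numpy as np

def class_balanced_sample(y, n_total, rng=None):
    """Return indices for a roughly class-balanced random sample of size ~n_total."""
    rng = np.random.default_rng() if rng is None else rng
    classes = np.unique(y)
    per_class = n_total // len(classes)
    idx = []
    for c in classes:
        c_idx = np.flatnonzero(y == c)
        # Sample without replacement, capped at the number of examples in this class
        take = min(per_class, len(c_idx))
        idx.append(rng.choice(c_idx, size=take, replace=False))
    return np.concatenate(idx)

# Illustrative usage: evaluate once per epoch, either on the full training set
# or on a balanced subsample when the dataset is large enough that noise from
# sampling is negligible.
# sample_idx = class_balanced_sample(y_train, n_total=10_000_000)
# train_acc = model.score(X_train[sample_idx], y_train[sample_idx])
```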

Above all, think about why you’re measuring accuracy on your training set. In most cases when I see someone doing this, they don’t have a great reason for doing so beyond wanting higher accuracy numbers.

View original question on Quora >

Follow Slater on Quora >>
