

Why do machine learning algorithms require large amounts of computer space for all kinds of datasets?

August 29, 2018 | Ask Slater


It depends a bit on which model type you’re talking about, but for the purposes of illustration I’m going to assume we’re talking about something like a logistic regression model on top of a tf-idf vector.
In this situation, as in many others, model size is more or less constant regardless of dataset size. Here, you need to store about one parameter per term in your tf-idf vector. Depending on exactly what’s in your vector (bigrams, trigrams, etc.), you can assume you’ve got about 10k entries, largely independent of dataset size. Whether your dataset is 1,000 examples or 100,000 examples, your tf-idf vector is going to be about the same size, so you need to store the same amount of data.
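To make that concrete, here is a minimal sketch using scikit-learn; `docs` and `labels` are hypothetical variables holding raw text and class labels. It shows that the number of stored parameters tracks the capped tf-idf vocabulary, not the number of training examples:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def fitted_parameter_count(docs, labels):
    # Cap the vocabulary at ~10k terms (unigrams and bigrams), as in the
    # estimate above; the cap, not the data volume, fixes the model size.
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=10_000)
    X = vectorizer.fit_transform(docs)
    model = LogisticRegression(max_iter=1000).fit(X, labels)
    # One coefficient per vocabulary term per class, plus the intercepts.
    return model.coef_.size + model.intercept_.size

# Whether `docs` holds 1,000 or 100,000 examples, the count comes back at
# roughly 10,001 for a binary problem, because the vocabulary is capped.
```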

Now, in this case the model is quite small. Assuming you’re storing these parameters as eight-byte floats (which is probably overkill), you’re looking at a model somewhere in the tens of KB to low MB range, depending on how many classes you’re predicting, but that could still be pretty large relative to the size of your dataset.
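If you’d rather check that than take it on faith, one rough way (a sketch, assuming the fitted `model` and `vectorizer` from the snippet above) is to serialize the objects and look at their size on disk:

```python
import os
import pickle

def on_disk_size_kb(obj, path="model.pkl"):
    # Serialize the object, measure the file, then clean up.
    with open(path, "wb") as f:
        pickle.dump(obj, f)
    size_kb = os.path.getsize(path) / 1024
    os.remove(path)
    return size_kb

# With a capped 10k-term vocabulary, the logistic regression itself lands
# in the tens-to-hundreds-of-KB range; the vectorizer's vocabulary often
# takes more space on disk than the model's coefficients do.
```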

Now, that’s a logistic regression model, which, generally speaking, is very parameter-efficient. Deep learning models tend to sit on the opposite end of the spectrum. If we look at a modern NLP benchmark (for example, the leaderboards tracked by the Stanford Natural Language Processing Group), we’ll see that many of the high-performing solutions have tens of millions of parameters. Using the same basic assumption as above of eight bytes per parameter, we’ve got a model that’s going to be in the hundreds of MB.
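The arithmetic behind both estimates is just parameter count times bytes per parameter; here is a small sketch (the 50M-parameter figure is an illustrative assumption, not a number from any particular model):

```python
# Back-of-envelope sizes under the eight-bytes-per-parameter assumption
# used above (float64); most frameworks actually save float32 (4 bytes),
# so real on-disk numbers are often about half of these.
def model_size_mb(num_params, bytes_per_param=8):
    return num_params * bytes_per_param / 1e6

print(model_size_mb(10_000))      # tf-idf + logistic regression: ~0.08 MB
print(model_size_mb(50_000_000))  # a 50M-parameter deep model: ~400 MB
```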

That’s a pretty normal range for a modern deep learning model. Something in the realm of hundreds of MB to low GB probably covers ~80% of modern models, unless particular effort has been taken to reduce the number of parameters and thus the model size (note: this is a very active area of research, typically referred to as distillation).

The important thing to note, though, is that the size of your model is generally independent of the size of your dataset. As learning progresses you get better parameter values and the exact contents of your model change, but since you’re not changing the model architecture itself (again, this is a generality, not a hard-and-fast rule), your model size doesn’t vary; it stays wherever it started.

Now, “large amount of computer space” is relative. Typically the couple of GB that a hefty deep learning model takes up shouldn’t be significant in terms of storage. A dataset of a couple of GB would be considered quite small in most contexts, and any modern computer can hold dozens to hundreds of these models, which is far beyond what is needed in most cases.

The real issue comes down to run-time memory consumption. During training you’re managing much more than just the model’s static footprint on disk: you have to keep a lot of intermediate state in memory (activations, gradients, optimizer state), which can easily cause your memory footprint to grow several-fold, leading to runtime footprints that can easily be upward of 4GB depending on the architecture you’re working with. That’s a problem because many GPUs only have 4GB of onboard memory. You’ll run into this pretty frequently, especially in the language domain, so investing in a GPU with 8+GB of memory is highly advised, especially if you’re using TensorFlow, which is very greedy when it comes to memory footprint.
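For a sense of why the footprint multiplies during training, here is a very rough accounting sketch. It assumes an Adam-style optimizer, and the parameter count, per-example activation size, and batch size below are all illustrative assumptions rather than measurements:

```python
# Rough accounting of training memory: weights, gradients, and two Adam
# moment buffers all scale with parameter count, while the activations
# saved for backprop scale with batch size and network depth.
def training_memory_gb(num_params, activation_bytes_per_example,
                       batch_size, bytes_per_param=4):
    weights = num_params * bytes_per_param
    gradients = num_params * bytes_per_param
    optimizer_state = 2 * num_params * bytes_per_param  # Adam's m and v
    activations = activation_bytes_per_example * batch_size
    return (weights + gradients + optimizer_state + activations) / 1e9

# A 50M-parameter model with ~100 MB of saved activations per example at
# batch size 32 already needs ~4 GB before any framework overhead.
print(training_memory_gb(50_000_000, 100e6, 32))
```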

View original question on Quora >

Follow Slater on Quora >>

