[Solved] MemoryError: unable to allocate array with shape and data type float32 while using word2vec in python

Hello guys, how are you all? Hope you all are fine. Today I got the following error while using word2vec in Python: MemoryError: unable to allocate array with shape and data type float32. So here I am explaining all the possible solutions.

Without wasting your time, let’s start this article to solve this error.

How Does MemoryError: unable to allocate array with shape and data type float32 while using word2vec in python Error Occur?

I got this error while training a gensim word2vec model: after the vocabulary survey, gensim estimated it needed about 8.8GB for 2372206 words at 400 dimensions, and allocating the (2372206, 400) float32 vector array failed with a MemoryError because the machine didn’t have that much free RAM.

How To Solve MemoryError: unable to allocate array with shape and data type float32 while using word2vec in python Error?

To solve this error, reduce the vector size, which saves memory. A setting of size=300 is popular for word-vectors and, compared to the size=400 used here, reduces the memory requirements by a quarter. You can also shrink the vocabulary by raising min_count or setting max_final_vocab; both options are explained in Solution 1 below.

Solution 1

Ideally, you should paste the text of your error into your question, rather than a screenshot. However, I see the two key lines:

<TIMESTAMP> : INFO : estimated required memory for 2372206 words and 400 dimensions: 8777162200 bytes
...
MemoryError: unable to allocate array with shape (2372206, 400) and data type float32

After making one pass over your corpus, the model has learned how many unique words will survive its vocabulary pruning, and it reports how large a model must be allocated: one taking about 8777162200 bytes (about 8.8GB). But when trying to allocate the required vector array, you’re getting a MemoryError, which indicates that not enough addressable memory (RAM) is available.
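As a rough sanity check of that estimate (assuming, as the log implies, 4-byte float32 entries, and that word2vec keeps more than one array of that shape), the numbers line up:

# Back-of-the-envelope check of the logged estimate; assumes 4-byte float32
# entries and that word2vec allocates at least two (words x dims) weight
# arrays (input vectors plus output weights), plus vocabulary overhead.
words, dims = 2_372_206, 400
one_array = words * dims * 4
print(one_array)      # 3795529600 -> a single array is already ~3.8GB
print(2 * one_array)  # 7591059200 -> close to the 8777162200 bytes reported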

You can either:

  1. run where there’s more memory, perhaps by adding RAM to your existing system; or
  2. reduce the amount of memory required, chiefly by reducing either the number of unique word-vectors you’d like to train, or their dimensional size.

You could reduce the number of words by increasing the default min_count=5 parameter to something like min_count=10 or min_count=20 or min_count=50. (You probably don’t need over 2 million word-vectors – many interesting results are possible with just a vocabulary of a few tens-of-thousands of words.)

You could also set a max_final_vocab value, to specify an exact number of unique words to keep. For example, max_final_vocab=500000 would keep just the 500000 most-frequent words, ignoring the rest.

Reducing the size will also save memory. A setting of size=300 is popular for word-vectors, and would reduce the memory requirements by a quarter.

Together, using size=300, max_final_vocab=500000 should trim the required memory to under 2GB.
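Putting these together, here is a minimal sketch (assuming gensim 3.x, where the parameter is named size; in gensim 4.x it was renamed to vector_size; gensim’s tiny built-in common_texts corpus stands in for your own tokenized sentences):

from gensim.models import Word2Vec
from gensim.test.utils import common_texts  # tiny toy corpus for demonstration

# Replace common_texts with your own iterable of tokenized sentences.
model = Word2Vec(
    common_texts,
    size=300,                # 300 dimensions instead of 400: ~25% less memory
    max_final_vocab=500000,  # keep at most the 500,000 most-frequent words
    min_count=1,             # raise to 10, 20, or 50 on a real corpus
)
print(len(model.wv.vocab))   # how many words survived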

Solution 2

I encountered the same problem while working on a pandas DataFrame. I solved it by converting float64 columns to uint8 (of course, only for those that don’t actually need to be float64; for the others, you can try float32 instead of float64).

import numpy as np

data['label'] = data['label'].astype(np.uint8)

If you encounter conversion errors, errors='ignore' leaves the original values unchanged:

data['label'] = data['label'].astype(np.uint8, errors='ignore')
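To see how much this saves, here is a small self-contained sketch (the million-row label column is hypothetical):

import numpy as np
import pandas as pd

# Hypothetical frame: a million labels stored as float64 by default.
data = pd.DataFrame({'label': np.zeros(1_000_000)})
print(data['label'].memory_usage())  # ~8,000,000 bytes as float64

data['label'] = data['label'].astype(np.uint8)
print(data['label'].memory_usage())  # ~1,000,000 bytes as uint8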

Summary

That’s all about this issue. I hope one of these solutions helped you. Comment below with your thoughts and your queries, and let us know which solution worked for you. Thank you.
