Detailed explanation of obtaining word vectors from a txt file in Python

  • 2021-07-10 20:10:41
  • OfStack

When reading the Chinese word vectors from https://github.com/Embedding/Chinese-Word-Vectors, I chose a txt file of more than 3 GB. Previously I had used word2vec for word vectors, so I could simply import the model and then use index2word.
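For comparison, the old word2vec workflow looks roughly like this with gensim (a minimal sketch, assuming a pre-4.0 gensim where the attribute is still called index2word; newer versions renamed it to index_to_key, and the file name here is only a placeholder):

from gensim.models import KeyedVectors

# load a pre-trained word2vec model (binary format assumed here)
model = KeyedVectors.load_word2vec_format("word2vec.bin", binary=True)

print(model.index2word[:10])  # vocabulary, ordered by frequency
print(model["中国"])           # a word's vector, directly indexable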

Because this is such a large txt file, I tried a pandas DataFrame, np.loadtxt, and so on, but all of them failed. The main problems I ran into were:

  • How to read the complete large file without running out of memory (MemoryError and the like)
  • How to save the parsed result as an .npy file
  • How to find the corresponding vector for a given word

Solution:

The code I tried:


Code 1:

import numpy as np

try:
    lines = np.loadtxt(filepath)
except ValueError as e:
    print(e)

I felt this part could not really be written this way; and even if the exception is caught, the program will not continue looping to read the rest of the txt file, so what then?

Code 2:

import numpy as np

lines = []
with open(filepath) as f:
    for line in f:
        lines.append(line)
np.save(filepath, lines)  # np.save appends .npy to the name, so the txt is not overwritten

Code 3:

def readEmbedFile(embedFile):
    embedId = {}
    lines = []
    with open(embedFile, 'r', encoding="utf-8") as input_file:
        for line in input_file:
            lines.append(line)
    nwords = len(lines) - 1
    splits = lines[1].strip().split(' ')  # the first row holds statistics, so use the second row
    dim = len(splits) - 1
    embeddings = []
    for lineId in range(len(lines)):
        splits = lines[lineId].split(' ')
        if len(splits) > 2:  # skips the statistics row, which has only two fields
            # assign embedId: map word -> row index in embeddings
            embedId[splits[0]] = len(embeddings)
            # assign embeddings
            emb = [float(splits[i]) for i in range(1, dim + 1)]
            embeddings.append(emb)
    return embedId, embeddings
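If this function did finish, the usual next step would be to pack the nested list into one numpy matrix (a sketch; the file name is a placeholder):

import numpy as np

embedId, embeddings = readEmbedFile("word_vectors.txt")
emb_matrix = np.array(embeddings, dtype=np.float32)  # one contiguous (nwords, dim) matrix
print(emb_matrix.shape)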

Code 4:

import numpy as np

def load_txt(filename):
    lines = []
    vec_dict = {}
    with open(filename, 'r', encoding="utf-8") as f:
        for line in f:
            lines.append(line.strip())
    for i, line in enumerate(lines):
        if i == 0:  # the first line holds the vocabulary size and dimension, skip it
            continue
        parts = line.split(" ")
        word_id = parts[0]
        word_vec = [float(parts[j]) for j in range(1, 301)]  # 300-dimensional vectors
        vec_dict[word_id] = np.array(word_vec)
    return vec_dict

The main reasons for the memory shortage were:

  • My virtual machine really did not have enough memory. After switching to a lab host with 32 GB of RAM, I could get idvec, but still could not get the vectors; the error reported was MemoryError.
  • The word vectors have to be converted to float, and in Python a str occupies more memory than a float, as the following code shows:


print("str",sys.getsizeof(""))
print("float",sys.getsizeof(1.1))
print("int",sys.getsizeof(1))
print("list",sys.getsizeof([]))
print("tuple",sys.getsizeof(()))
print("dic",sys.getsizeof([]))

str 49
float 24
int 28
list 64
tuple 48
dic 64

On my computer (64-bit operating system, 64-bit Python), the per-object sizes sort as follows:

dict = list > str > tuple > int > float
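One practical consequence: storing the vectors as a numpy array with a compact dtype, instead of as Python str or float objects, cuts memory considerably. A minimal sketch (the float32 choice is my assumption, not something the code above used):

import sys
import numpy as np

values = ["0.1", "-0.25", "0.5"]          # tokens as read from the txt file
as_floats = [float(v) for v in values]    # Python floats: 24 bytes each, plus list overhead
as_array = np.array(as_floats, dtype=np.float32)  # 4 bytes per value, one contiguous block

print(sys.getsizeof(values[0]))  # per-str size, about 50 bytes
print(as_array.nbytes)           # raw payload: 3 * 4 = 12 bytes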

When reading it back, you can use np.load().item() to restore the original dictionary.

Then, through ordinary Python dictionary operations, you can look up the word vector of any word: dic[vocab].
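Putting the pieces together, a minimal sketch of the save/load round trip (file names are placeholders; recent numpy versions also require allow_pickle=True to load a pickled dictionary):

import numpy as np

vec_dict = load_txt("word_vectors.txt")  # build the dictionary with Code 4
np.save("word_vectors.npy", vec_dict)    # the dict is pickled inside the .npy file

dic = np.load("word_vectors.npy", allow_pickle=True).item()  # .item() unwraps the 0-d object array
print(dic["中国"])                        # word vector lookup via the dictionary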

Experience:

There are still 5 or 6 hurdles to clear before the project's problems are completely solved, but calm down and break through step by step!

