Hi! I have an XML file about 300 MB in size. I need to pull some info out of it and save the result to a file next to the script. I wrote this script:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from xml.dom import minidom
xmldoc = minidom.parse('otchet_tovara.xml')
itemlistcod = xmldoc.getElementsByTagName('cod')
itemlistnaimenov = xmldoc.getElementsByTagName('naimenov')
traine_cod = []
traine_naimenov = []
traine_all = []
for s in itemlistcod:
    traine_cod.append(s.attributes['cod'].value)
for s in itemlistnaimenov:
    traine_naimenov.append(s.attributes['naimenov'].value)
i = 0
f = open('nelikvid.txt', 'w')
while i < len(traine_cod):
    s1 = traine_cod[i]
    s2 = traine_naimenov[i]
    s3 = s1 + ' - ' + s2
    traine_all.append(s3)
    f.write(s3 + '\n')
    i = i + 1
f.close()
It all works fine when the file being processed is small, 1-2 MB. But when I try to process the big file, everything grinds to a halt! The script thrashes the HDD constantly and the system hangs hard. I couldn't even check the RAM usage(((
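(For next time, here is a minimal sketch of how I could log the script's own peak memory from inside it, assuming a Unix system with the standard resource module; where to put the print is arbitrary:)

import resource

# ru_maxrss is the peak resident set size of this process:
# kilobytes on Linux, bytes on macOS
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print('peak RSS: %s' % peak)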
I think the whole crash happens when this runs:

for s in itemlistcod:
    traine_cod.append(s.attributes['cod'].value)
What would you recommend? How can I fix this?
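I've read that minidom builds the entire DOM in RAM, while xml.etree.ElementTree.iterparse can stream the file element by element. Would something like this be the right direction? (A sketch under my assumptions: same file, tag, and attribute names as above, and each cod followed by its matching naimenov; the real nesting of my file may differ.)

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import xml.etree.ElementTree as ET

cod = None
with open('nelikvid.txt', 'w') as f:
    # 'end' fires once an element has been fully parsed
    for event, elem in ET.iterparse('otchet_tovara.xml', events=('end',)):
        if elem.tag == 'cod':
            cod = elem.get('cod')
        elif elem.tag == 'naimenov':
            # pair the most recent cod with this naimenov
            f.write('%s - %s\n' % (cod, elem.get('naimenov')))
        elem.clear()  # free the parsed element so memory stays flat

As I understand it, the elem.clear() call is what keeps memory from growing, since otherwise the tree still accumulates. Is that right?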