I want to keep saving data by scraping a web page at regular intervals.
However, if I keep collecting for several days, I may run out of memory if I hold everything as Python objects.
In this case, can I avoid running out of memory by not accumulating the scraped data in memory, and instead appending it to a file as soon as it is collected?
Tags: memory, crawling, python
All resources are finite, but RAM is expensive and small compared with disk storage, and it is volatile: whatever lives there disappears when the process ends.
So yes, you should write the data out to separate persistent storage as you collect it, as in the sketch below. In larger pipelines the raw data is often stored in something like HDFS, analyzed there, and the aggregated results are then moved into an RDBMS.
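As a minimal sketch of the append-as-you-go approach (assuming a hypothetical scrape_once() that returns one batch of records; the file name, fields, and interval are placeholders):

```python
import csv
import os
import time

def scrape_once():
    # Hypothetical scraper: returns one batch of records as dicts.
    # Replace with your real scraping logic (e.g. requests + parsing).
    return [{"timestamp": time.time(), "value": 42}]

PATH = "scraped.csv"
FIELDS = ["timestamp", "value"]

# Write the header only if the file is new or empty.
new_file = not os.path.exists(PATH) or os.path.getsize(PATH) == 0

with open(PATH, "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    if new_file:
        writer.writeheader()
    while True:
        for record in scrape_once():
            writer.writerow(record)  # append each record immediately
        f.flush()       # push the batch to disk; nothing accumulates in RAM
        time.sleep(60)  # wait until the next scraping interval
```

Because each batch is written and flushed right away, memory use stays roughly constant no matter how many days the crawler runs; only the file on disk grows.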