
High memory usage using Python multiprocessing


Problem:

I have seen a couple of posts on memory usage with the Python multiprocessing module. However, those questions do not seem to answer the problem I have here. I am posting my analysis in the hope that someone can help me.

Issue

I am using multiprocessing to perform tasks in parallel, and I noticed that the memory consumption of the worker processes grows indefinitely. I have a small standalone example that should replicate what I observe.

import multiprocessing as mp
import time

def calculate(num):
    l = [num*num for num in range(num)]
    s = sum(l)
    del l  # delete lists as an option
    return s

if __name__ == "__main__":
    pool = mp.Pool(processes=2)
    time.sleep(5)

    print "launching calculation"
    num_tasks = 1000
    tasks = [pool.apply_async(calculate, (i,)) for i in range(num_tasks)]
    for f in tasks:
        print f.get(5)
    print "calculation finished"
    time.sleep(10)

    print "closing pool"
    pool.close()
    print "closed pool"
    print "joining pool"
    pool.join()
    print "joined pool"
    time.sleep(5)

System

I am running Windows, and I use the Task Manager to monitor memory usage. I am running Python 2.7.6.

Observation

I have summarized the memory consumption of the 2 worker processes below.

+-----------+---------------------+---------------------+
| num_tasks |   memory with del   |  memory without del |
|           |  proc_1  |  proc_2  |  proc_1  |  proc_2  |
+-----------+---------------------+---------------------+
|   1000    |   4884   |   4694   |   4892   |   4952   |
|   5000    |   5588   |   5596   |   6140   |   6268   |
|  10000    |   6528   |   6580   |   6640   |   6644   |
+-----------+---------------------+---------------------+

In the table above, I varied the number of tasks and observed the memory consumed at the end of all calculations, before join-ing the pool. The 'with del' and 'without del' options correspond to uncommenting or commenting, respectively, the del l line inside the calculate(num) function. Before the calculation, the memory consumption is around 4400.

  1. It looks like manually clearing out the lists results in lower memory usage for the worker processes. I thought the garbage collector would have taken care of this. Is there a way to force garbage collection?
  2. It is puzzling that the memory usage keeps growing with the number of tasks in both cases. Is there a way to limit the memory usage?
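On the first question: in CPython, del l already releases the list, because its reference count drops to zero; the cyclic garbage collector mainly matters for reference cycles. You can still force a collection pass inside each worker with the standard gc module. A minimal sketch (written in Python 3 syntax rather than the original Python 2 code, with a smaller task count for illustration):

```python
import gc
import multiprocessing as mp

def calculate(num):
    l = [n * n for n in range(num)]
    s = sum(l)
    del l         # drops the last reference; CPython frees the list here
    gc.collect()  # force a collection pass in this worker process
    return s

if __name__ == "__main__":
    pool = mp.Pool(processes=2)
    tasks = [pool.apply_async(calculate, (i,)) for i in range(10)]
    print([t.get() for t in tasks])
    pool.close()
    pool.join()
```

Note that even after objects are freed, the process allocator may not return the pages to the OS, which is one reason the numbers reported by Task Manager can stay high.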

I have a process that is based on this example and is meant to run long term. I observe that the worker processes hog a lot of memory (~4GB) after an overnight run. Doing a join to release the memory is not an option, and I am trying to figure out a way to do it without join-ing.
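For the long-running case, multiprocessing.Pool accepts a maxtasksperchild argument (available since Python 2.7): each worker is retired and replaced by a fresh process after completing that many tasks, so its accumulated memory is returned to the OS without join-ing the pool. A hedged sketch (Python 3 syntax; the value 5 is an arbitrary choice for illustration):

```python
import multiprocessing as mp

def calculate(num):
    return sum(n * n for n in range(num))

if __name__ == "__main__":
    # each worker process is replaced after 5 tasks, capping how much
    # memory any single worker can accumulate over a long run
    pool = mp.Pool(processes=2, maxtasksperchild=5)
    tasks = [pool.apply_async(calculate, (i,)) for i in range(20)]
    total = sum(t.get() for t in tasks)
    print(total)
    pool.close()
    pool.join()
```

Lowering maxtasksperchild trades process-restart overhead for a tighter bound on per-worker memory growth.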

This seems a little mysterious. Has anyone encountered something similar? How can I fix this issue?


Solution:

Reference: https://stackoom.com/en/question/1S9JP
