From 558dc7d84b46a1924e06808caa8b113238961fd3 Mon Sep 17 00:00:00 2001
From: Carl-Daniel Hailfinger <c-d.hailfinger.devel.2006@gmx.net>
Date: Mon, 26 Sep 2016 17:07:41 +0200
Subject: [PATCH 08/11] Improve memory management of nbdkit python plugin
 example

Hi,

the nbdkit python plugin example has suboptimal memory management:
- it creates the disk image as an immutable string at init
- it copies the requested slice into a new bytearray on every read
- on every write it copies the string before and the string after the
written region and concatenates those pieces with the written data into
a brand-new disk image string (a rough sketch follows below)

This is not a problem as long as the image is small, but in my tests
with a 5 GB image nbdkit already used 15 GB of RAM immediately after
startup, and even more (20-25 GB) on the first write.

This patch changes the example to use a bytearray everywhere and to
modify it in place instead of rebuilding it. With the patch applied,
nbdkit with a 5 GB image stays at roughly 5 GB of RAM even under heavy
read/write activity.
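
A minimal sketch of the in-place pattern the patch switches to (again
with placeholder sizes):

    disk = bytearray(1024 * 1024)        # mutable, zero-filled buffer
    buf = b"\xff" * 512
    offset = 4096
    disk[offset:offset + len(buf)] = buf # slice assignment mutates disk
                                         # in place, no full-image copy
    chunk = disk[offset:offset + 512]    # a read slices out only the
                                         # requested region as a bytearray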

Regards,
Carl-Daniel
---
 plugins/python/example.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/plugins/python/example.py b/plugins/python/example.py
index aa099eb..184896e 100644
--- a/plugins/python/example.py
+++ b/plugins/python/example.py
@@ -29,7 +29,7 @@
 # reconnect to the same server you should see the same disk.  You
 # could also put this into the handle, so there would be a fresh disk
 # per handle.
-disk = "\0" * (1024*1024);
+disk = bytearray(1024 * 1024)
 
 # This just prints the extra command line parameters, but real plugins
 # should parse them and reject any unknown parameters.
@@ -50,9 +50,9 @@ def get_size(h):
 
 def pread(h, count, offset):
     global disk
-    return bytearray (disk[offset:offset+count])
+    return disk[offset:offset+count]
 
 def pwrite(h, buf, offset):
     global disk
     end = offset + len (buf)
-    disk = disk[:offset] + buf + disk[end:]
+    disk[offset:end] = buf
-- 
2.7.4