I'm hoping to find a way to optimise the following situation. I have a large contour plot created with matplotlib's imshow. I then want to use this plot to create a large number of png images, where each image is a small section of the contour image, obtained by changing the x and y limits and the aspect ratio.
So no plot data changes inside the loop; only the axis limits and the aspect ratio change from one png image to the next.
The following MWE creates 70 png images in a "figs" folder, demonstrating the simplified idea. About 80% of the runtime is taken up by fig.savefig('figs/'+filename) (a small cProfile sketch confirming this is included after the MWE).
I've looked into the following without coming up with an improvement:
- An alternative to matplotlib with a focus on speed -- I've struggled to find any examples/documentation of contour/surface plots with similar requirements.
- Multiprocessing -- Similar questions I've seen here appear to require fig = plt.figure() and ax.imshow to be called within the loop, since fig and ax can't be pickled (a rough sketch of what that would look like is included below). In my case this would be more expensive than any speed gain from multiprocessing.
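For illustration, here is a rough sketch of the multiprocessing variant I mean (render_one and the task tuples are just placeholder names, not real code from my project): every worker has to rebuild the figure and rerun imshow for each image, which in my case is the expensive part.

import numpy as np
import matplotlib as mpl
mpl.use('agg')
import matplotlib.pyplot as plt
from multiprocessing import Pool
import os

os.makedirs('figs', exist_ok=True)
data = np.random.rand(25, 25)   # placeholder for the expensive real data

def render_one(task):
    # fig/ax can't be pickled, so each task rebuilds the figure and
    # repeats the imshow call before adjusting limits and saving.
    xlim, ylim, aspect, filename = task
    fig = plt.figure()
    ax = fig.add_axes([0., 0., 1., 1.])
    ax.imshow(data, interpolation='nearest')
    ax.set_aspect(aspect)
    ax.set_xlim(xlim)
    ax.set_ylim(ylim)
    fig.savefig(filename)
    plt.close(fig)

if __name__ == '__main__':
    tasks = [([0, 5], [0, 5], 1.0, 'figs/mp_{:d}.png'.format(k)) for k in range(4)]
    with Pool(4) as pool:
        pool.map(render_one, tasks)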
I'd appreciate any insight or suggestions you might have.
import numpy as np
import matplotlib as mpl
mpl.use('agg')
import matplotlib.pyplot as plt
import time, os

def make_plot(x, y, fig, ax):
    # pick a random aspect ratio and a random x/y window into the image
    aspect = np.random.random(1) + y/2.0 - x
    xrand = np.random.random(2) * x
    xlim = [min(xrand), max(xrand)]
    yrand = np.random.random(2) * y
    ylim = [min(yrand), max(yrand)]
    filename = '{:d}_{:d}.png'.format(x, y)
    ax.set_aspect(abs(aspect[0]))
    ax.set_xlim(xlim)
    ax.set_ylim(ylim)
    fig.savefig('figs/' + filename)   # ~80% of the runtime is spent here

if not os.path.isdir('figs'):
    os.makedirs('figs')

data = np.random.rand(25, 25)
fig = plt.figure()
ax = fig.add_axes([0., 0., 1., 1.])
# in the real case, imshow is an expensive calculation which can't be put inside the loop
ax.imshow(data, interpolation='nearest')

tstart = time.perf_counter()
for i in range(1, 8):
    for j in range(3, 13):
        make_plot(i, j, fig, ax)
print('took {:.2f} seconds'.format(time.perf_counter() - tstart))
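For reference, the savefig share can be confirmed by profiling the loop with the standard cProfile module. The snippet below is only a sketch meant to be appended to the script above; profile_me is just a throwaway wrapper so cProfile has a single statement to run.

import cProfile

def profile_me():
    # same double loop as above, wrapped so it can be passed to cProfile
    for i in range(1, 8):
        for j in range(3, 13):
            make_plot(i, j, fig, ax)

cProfile.run('profile_me()', sort='cumulative')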