Suppose I have N Dockerfiles, each of which is independently useful.

A.dockerfile
B.dockerfile
C.dockerfile
...
N.dockerfile

And I build them all from the same context:

docker build .. -f A.dockerfile
docker build .. -f B.dockerfile 
docker build .. -f C.dockerfile
...
docker build .. -f N.dockerfile
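
For what it's worth, the above is with the classic builder. Here is a sketch of the same loop with BuildKit enabled, which, as I understand it, transfers the context incrementally and re-sends only files that changed between builds (the image tags here are made up):

export DOCKER_BUILDKIT=1   # opt in to BuildKit's incremental context transfer

docker build .. -f A.dockerfile -t img-a   # first build uploads the full context
docker build .. -f B.dockerfile -t img-b   # later builds re-send only changed files
docker build .. -f N.dockerfile -t img-n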

There is caching at the layer level, yes. But even having leveraged that caching usefully, I am back to saw-tooth performance: for each of the N Dockerfiles, many of which would otherwise complete in roughly O(1) time thanks to the cache, I must send the entire context .. (approx. 3 GB in my case) to the Docker daemon every time.

The result is saw-tooth performance, but all the chugging isn't really accomplishing anything: functionally, I am knitting together different cached layers and cherry-picking files from the context, so the vast majority of the processing is just byte churn.
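
Part of that churn should at least be avoidable with a .dockerignore at the context root, assuming (as in my case) most of the 3 GB is files that no Dockerfile ever COPYs:

# .dockerignore at the root of the context (..)
# These entries are placeholders for whatever makes up the bulk of the 3 GB.
.git
node_modules/
data/
*.tar.gz

I believe BuildKit also honors per-Dockerfile ignore files named like A.dockerfile.dockerignore, which would let each of the N builds trim the context differently. But that only shrinks the upload; it doesn't eliminate the repeated send.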


Is there any way to persist the Docker context cache somehow, so the context doesn't have to be re-sent for every one of the N builds?
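
One direction I have been eyeing, sketched here with hypothetical names: docker buildx bake can drive all N builds from a single invocation, and as far as I can tell BuildKit deduplicates a shared local context within that one session, so it would be transferred once rather than N times.

# docker-bake.hcl (hypothetical) -- run with: docker buildx bake
group "default" {
  targets = ["a", "b", "n"]
}

target "a" {
  context    = ".."
  dockerfile = "A.dockerfile"   # note: resolved relative to the context
  tags       = ["img-a"]
}

target "b" {
  context    = ".."
  dockerfile = "B.dockerfile"
  tags       = ["img-b"]
}

target "n" {
  context    = ".."
  dockerfile = "N.dockerfile"
  tags       = ["img-n"]
}

But I haven't verified that this actually avoids the repeated context transfer, hence the question.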


1 Answer

Waiting for an expert to answer this.
