The project I'm working on will generate code for a large number of classes - hundreds to thousands is expected. It is not known at generation time how many of these classes will actually be accessed.
The generated classes could (1) all live in a single assembly (or possibly a handful of assemblies), which would be loaded when the consuming process starts.
...or (2) I could generate a separate assembly for each class, much like Java compiles every class to its own *.class binary file, and then come up with a mechanism to load the assemblies on demand.
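For reference, the Java behavior that case (2) would imitate is built into the JVM: a class is loaded and initialized only on first actual use, not when the program starts. The sketch below (class names are my own, purely illustrative) demonstrates this with a static initializer that flips a flag only when the class is first touched:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class LazyLoadDemo {
    static final AtomicBoolean heavyInitialized = new AtomicBoolean(false);

    // Stand-in for one of the hundreds of generated classes.
    static class Heavy {
        // Static initializer runs only when the class is first initialized,
        // i.e. on first actual use -- not at program startup.
        static { heavyInitialized.set(true); }
        static int answer() { return 42; }
    }

    public static void main(String[] args) {
        // The class is compiled and on the classpath, but untouched so far:
        System.out.println("before first use: " + heavyInitialized.get());
        // First real use triggers loading + initialization of Heavy.
        int x = Heavy.answer();
        System.out.println("after first use:  " + heavyInitialized.get());
    }
}
```

The open question for .NET is whether an analogous per-assembly scheme (e.g. hooking an assembly-resolve event to load files on demand) pays off, or whether the CLR's own lazy JIT-compilation within one large assembly already gives most of the benefit.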
The question: which case would yield better (memory and time) performance?
My gut feeling is that for case (1) the load time and memory use are directly proportional to the number of classes packed into the single monolithic assembly. OTOH, case (2) comes with its own complications: many small files, bookkeeping for which assemblies are loaded, and per-assembly loading overhead.
If you know any resources pertaining to the internals of loading assemblies, especially what code gets called (if any!?) and what memory is allocated (book-keeping for the newly loaded assembly), please share them.