It makes no difference in the final application size.
Whenever anything is imported from a Python module, even a single module variable, the whole module is executed (like any `.py` file): the functions, classes, and variables defined in it are created and remain available there.
The only difference with `from X import Y` is that, in the module doing the import, only the name `Y` is created, referring to the same object `Y` that lives in module `X`. Likewise, if you do `import X`, only the name `X` is created, referring to the whole module.
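A minimal sketch, using the standard library's `math` module, showing that both forms just bind names to the same objects:

```python
import math
from math import pi

# `pi` and `math.pi` are the very same object; the import
# only created a new name in this module's namespace.
print(pi is math.pi)   # True
```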
If `X` has thousands of functions and variables, they are all in memory and available for immediate use either way. Subsequent imports never read module `X` from disk again: after the first `import`, module `X` is available in `sys.modules['X']`, and any later `import X` or `from X import ...` picks up the references straight from there.
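This caching is easy to observe; a small sketch using the standard library's `json` module:

```python
import sys
import json                # first import: json's code actually runs

cached = sys.modules['json']
import json as json2       # second import: just a dictionary lookup,
                           # nothing is read from disk again

print(json is cached)      # True
print(json is json2)       # True
```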
On the other hand, given the amount of memory in conventional PCs, and even in today's virtual servers, code size will hardly ever make a program heavier. Python bytecode, which is what is actually read when we import a module, is about the same size as the source code, perhaps 30% larger. And keep in mind that the whole Christian Bible, for example, counted as plain text, occupies only about 3 MB: that is, a program with as much code as the Bible has text would occupy roughly 4 MB of memory just by being imported, against typical PCs with 8000 MB of memory, or even virtual servers with 512 MB.
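The source-versus-bytecode comparison can be checked directly. A sketch, assuming the module's `.pyc` has already been generated in `__pycache__` (the `json` module is used only as an example):

```python
import importlib.util
import os

spec = importlib.util.find_spec('json')
source_size = os.path.getsize(spec.origin)   # size of json/__init__.py
cache_path = importlib.util.cache_from_source(spec.origin)

if os.path.exists(cache_path):               # the compiled .pyc file
    print(source_size, os.path.getsize(cache_path))
```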
The startup time of a module can weigh a bit: for something that big, it can take a few seconds if the `.pyc` file does not exist yet. But if a library has thousands of items that could weigh on the final executable, it is up to the library's author to split it into subpackages, which have to be imported explicitly.
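The standard library itself works this way: importing a parent package does not pull in its subpackages. For example (assuming a fresh interpreter where nothing has imported `xml.etree` yet):

```python
import xml

# The subpackage was not loaded by importing the parent:
print(hasattr(xml, 'etree'))   # False on a fresh interpreter

import xml.etree.ElementTree   # explicit import executes the subpackage
print(hasattr(xml, 'etree'))   # True
```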
It makes no difference; it's just a reference you're using. Roughly speaking, it's as if each import were a variable.
– Jefferson Quesado