By default, this is how it works in Python - being a dynamic language, the approach is completely different: everything in Python is an object derived from the class object, and the language does not check the type of any argument in a function call (although it does check the quantity and the names of the arguments).
So in a simple example:
def soma(a, b):
    return a + b
This function can be called with any values for a and b. If they are numbers, they are summed. If they are text (strings), the strings are concatenated (for strings, the "+" operator is overloaded to perform concatenation), and the function works just the same.
If you pass objects that do not implement addition with the "+" operator (to support the operator, the object's class has to have a method with the special name __add__), you get a TypeError at runtime.
That is: Python does not type-check, and that gives you the flexibility to perform the same operations and call the same functions with several different object types, provided they "work" the way the called function expects (in the case of soma, one of the two arguments has to have an __add__ or __radd__ method that recognizes the other object passed).
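To make this concrete, here is a small demonstration of my own (the values are illustrative, not from the original answer) showing the very same soma handling several types, and failing only at runtime when the operands do not support "+":

```python
def soma(a, b):
    return a + b

# Numbers are added
print(soma(2, 3))          # 5

# Strings are concatenated, because str defines __add__
print(soma("foo", "bar"))  # foobar

# Lists also define __add__, so the exact same function works
print(soma([1, 2], [3]))   # [1, 2, 3]

# Plain object() implements neither __add__ nor __radd__,
# so the error only appears at runtime, when "+" is attempted
try:
    soma(object(), object())
except TypeError as e:
    print("TypeError:", e)
```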
Nothing prevents you from doing manual checking inside the method or function, with an if using isinstance or issubclass to determine whether the objects passed implement the necessary behaviors - but this is rarely done. In any case, the recommendation is to test the behavior, not the class. For example, suppose a function that receives an object that may or may not contain text and, if it does, tries to convert it to uppercase with the object's own .upper() method - the form could be:
def generate_slug(obj):
    if hasattr(obj, "upper"):
        obj = obj.upper()
    ...
Rather than:
def generate_slug(obj):
    if isinstance(obj, str):
        obj = obj.upper()
    ...
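A quick illustration of why checking the behavior is more flexible - the Shouting class below is a hypothetical example of mine, not from the original: any object that happens to have an .upper() method is accepted, not only str:

```python
def generate_slug(obj):
    # Duck typing: we only care that the object *behaves* like text
    if hasattr(obj, "upper"):
        obj = obj.upper()
    return obj

class Shouting:
    """Not a str subclass, but it does implement .upper()."""
    def __init__(self, text):
        self.text = text
    def upper(self):
        return self.text.upper()

print(generate_slug("hello"))            # HELLO
print(generate_slug(Shouting("hello")))  # HELLO
print(generate_slug(42))                 # 42 - no .upper(), passed through unchanged
```

An isinstance(obj, str) check would have rejected Shouting even though it behaves exactly as the function needs.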
With this you solve everything you would solve with polymorphism by varying the types of the arguments passed. Now, there is another really useful feature in Python: optional parameters:
def soma3(a, b, c=0):
    return a + b + c
Note in this declaration that, with the =, a default value is specified for c: that is, soma3 can be called with 2 or 3 arguments. If only two are passed, c keeps the value 0. If 3 values are passed, the third one is used for c.
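For example (the calls below are my own, just to illustrate both forms):

```python
def soma3(a, b, c=0):
    return a + b + c

print(soma3(1, 2))     # 3 - c falls back to its default value, 0
print(soma3(1, 2, 3))  # 6 - the third argument is bound to c
```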
There are also syntaxes that allow an indeterminate number of positional parameters (for which only the order matters, not the name), and an indeterminate number of named parameters -
def soma_n(*args):
    acc = 0
    for arg in args:
        acc += arg
    return acc
soma_n(5,6,7,8,9,10)
I suggest looking for supplementary reading on named parameters, with more examples of *args and **kwargs usage - these are very important topics, but not directly related to the question of polymorphism.
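As a small taste of what that supplementary reading covers, here is a sketch of my own (the function name and behavior are illustrative assumptions, not from the original) combining both forms:

```python
def soma_tudo(*args, inicio=0, **kwargs):
    """Sums all positional values and the values of any extra keyword
    arguments, starting from an optional 'inicio' value."""
    acc = inicio
    for valor in args:       # *args collects extra positional arguments
        acc += valor
    for valor in kwargs.values():  # **kwargs collects extra named arguments
        acc += valor
    return acc

print(soma_tudo(1, 2, 3))                 # 6
print(soma_tudo(1, 2, inicio=10))         # 13
print(soma_tudo(1, 2, bonus=5, extra=2))  # 10
```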
So far we have: a single function can receive arguments of varying types, and a varying number of parameters - in contrast to statically typed languages, where implementing this kind of polymorphism requires one function declaration for each possible combination of types and number of parameters. (Try to imagine the soma3 above needing to work with every combination of integers, floats, decimals and fractions, each with 2 or 3 parameters - the number of necessary implementations grows geometrically. Okay, numbers could all descend from an abstract class "Number" - but you get the idea.)
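That flexibility is easy to verify: the very same soma3 accepts any mix of numeric types that know how to add to each other (the combinations below are my own examples):

```python
from decimal import Decimal
from fractions import Fraction

def soma3(a, b, c=0):
    return a + b + c

print(soma3(1, 2.5))                          # 3.5 - int + float, default c
print(soma3(Fraction(1, 2), Fraction(1, 3)))  # 5/6 - exact rational arithmetic
print(soma3(Decimal("1.1"), Decimal("2.2"), Decimal("3.3")))  # 6.6
```

One implementation covers all of these, because each numeric type supplies its own __add__/__radd__.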
Now, there are times when it is more elegant to define separate functions for different parameter types than to branch internally with an if. For that, the standard library has a "singledispatch" feature, which automatically selects a function body depending on the type of the first parameter. In interactive mode you can write:
In [15]: import functools
In [16]: @functools.singledispatch
...: def soma(a, b):
...: pass
...:
In [17]: @soma.register(int)
...: def s(a, b):
...: return f"{a + b} integer"
...:
In [18]: @soma.register(str)
...: def s(a, b):
...: return f"{a + b} string"
...:
In [20]: soma(3,4)
Out[20]: '7 integer'
...
Note that this lives in the standard library, but the dynamic nature of Python - combined with the ability to write higher-order functions, which in this case are used as decorators, prefixed to functions with the @ syntax - allows a project to create its own mechanisms to dispatch to other methods/functions automatically. This is not complex once you understand the language well.
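To illustrate that last point, here is a minimal home-made dispatch decorator - a simplified sketch of my own, not the actual implementation of functools.singledispatch - built with nothing more than a dict and a higher-order function:

```python
def meu_dispatch(func_padrao):
    """Dispatches to a registered implementation based on the exact type
    of the first argument; falls back to the default function."""
    registro = {}

    def registrar(tipo):
        def decorador(func):
            registro[tipo] = func
            return func
        return decorador

    def wrapper(primeiro, *args, **kwargs):
        impl = registro.get(type(primeiro), func_padrao)
        return impl(primeiro, *args, **kwargs)

    wrapper.register = registrar
    return wrapper

@meu_dispatch
def soma(a, b):
    raise TypeError(f"tipo nao suportado: {type(a).__name__}")

@soma.register(int)
def _(a, b):
    return f"{a + b} integer"

@soma.register(str)
def _(a, b):
    return f"{a + b} string"

print(soma(3, 4))          # 7 integer
print(soma("py", "thon"))  # python string
```

Unlike the real singledispatch, this sketch ignores inheritance (it looks up type() directly instead of walking the MRO), but it shows the core idea in a few lines.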
Last but not least, it is worth mentioning that while these dynamic features of Python are great for low-coupling code and quick feature writing, in larger projects with large teams they can get in the way - so for some years Python has been evolving a type-annotation specification, which does not force arguments to have a certain type at runtime, but allows helper tools to check the typing used in the calls in a separate step (commit hooks, test runs, etc.). This builds on a syntax introduced in Python 3.0 that allows specifying the type of a parameter with ":" in a function declaration:
In [28]: from numbers import Number
In [29]: def soma3(a: Number, b: Number, c: Number=0):
...: return a + b + c
...:
This makes explicit, to static checking tools (some of which can be used transparently by an IDE, for example), that the idea is for this function to accept only numeric types. At runtime, however, the language would still call this function even if invoked with strings, for example. (It would be possible to create a tool that raises an error automatically in that case - but then: (1) everything would be slower, and (2) you would lose the advantages of using a dynamic language. You could use Cython, for example, which can really take advantage of declared parameter types to optimize execution.)
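A quick demonstration of that last point (my own example, showing standard behavior): the annotations do not stop the call at runtime - they are just metadata:

```python
from numbers import Number

def soma3(a: Number, b: Number, c: Number = 0) -> Number:
    return a + b + c

# A static checker such as mypy would flag this call,
# but the interpreter happily executes it:
print(soma3("a", "b", "c"))  # abc

# The annotations are stored as plain data, inspectable at runtime:
print(soma3.__annotations__)
```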