Functions, Wrappers, and Decorators

A Tutorial about Python 3.x Syntax

In Python, all elements are objects, and functions are no exception: they're often referred to as "first-class functions" or "first-class objects", meaning the interpreter treats them as values that can be stored, passed around, and returned, just like numbers or strings.

Practically speaking, this just means that functions are instances of the built-in function type, which implements the __call__() method behind the parentheses operator. This means that functions can be managed by their alias (their "name") and can be selectively called.

Since a function can be selectively called, and functions are objects themselves, a function's alias can be passed to other functions, which can then manipulate or call it through that same __call__() interface.

In Python, these functions that call other functions are known as "wrappers" or "decorators", with the @ symbol being the short-hand decorator syntax supported by the interpreter. The short-hand notation saves a few lines of code, but it is extremely esoteric and hard to read when you're unfamiliar with the code.

This page gives some examples of how to read and write wrappers, decorators, and different kinds of functions. Feel free to contact us by email or on GitHub if you notice any issues, typos, or accidental misinformation. This page gives no guarantees, but is free to use for personal or professional education.


Python treats everything as an object, but the interpreter provides several short-hand keywords for creating and working with those objects.

This even extends to functions in Python, such that you can essentially think of functions as objects that are "callable" -- meaning they support the () operator via the special __call__() method.
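
As a quick illustration (shout is just a toy name), calling a function with the parentheses operator is short-hand for invoking its __call__() method, and the built-in callable() reports whether an object supports that operator:

def shout():
   print("hey!");
   return (None);
# fed

shout();                  # hey!
shout.__call__();         # identical: () is short-hand for __call__()
print(callable(shout));   # True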

In other programming languages you'll often find that functions and classes are treated very distinctly. You'll also find that many compiled programming languages have different types of functions -- free functions, global functions, static functions, scoped functions, class functions (methods), and so on.

With Python, there is only one "kind" of function. This is unlike C++, because in Python the module-methods and class-methods share identical syntax and usability. There are no concerns about static functions or static methods, in Python, by default. These designs exist, but they're not forced, and are usually only needed for highly technical optimisations.

In Python, it's only the scope of a function that defines its accessibility. This means that there's also no such thing as "public", "private", or "protected" functions, in Python.

All scoped elements are publicly accessible through their (outer scope) container.

Granted, Python functions are objects, so we could create instances of the underlying function type directly; but as the short-hand (and by far the most common) syntax for creating a function, we use the def keyword and the colon (:) punctuation to establish a new scope (the function-body). Every function returns a value, and if there is no explicitly written return statement then the interpreter will return None whenever the function reaches the end of its scope.

Here's a simple function that prints "hello" and returns nothing:

def say_hello():
   print("hello");
   return (None);
# fed

As mentioned, there's an implicit return of None, so here's an equivalent definition:

def say_hello():
  print("hello");
# fed

Maybe instead of "hello" we want to print whatever string is given to the function, so let's change the first example to now take an argument:

def say_something(msg_str):
   print(msg_str);
   return (None);
# fed

say_something("i'm giving up on you");
say_something("anywhere, i would've followed you");

Functions allow for two kinds of arguments:

  • Unnamed ("positional")
  • Named ("keyword")

The main usage restriction is:

Unnamed arguments cannot follow named arguments.

Here's a function using several unnamed ("positional") arguments in the caller:

def triple_add(x, y, z,):
   return (x + y + z);
# fed

w = triple_add(1, 2, 3,);

Here's that same function using named ("keyword") arguments in the caller:

def triple_add(x, y, z,):
   return (x + y + z);
# fed

w = triple_add(
   x= 1,
   y= 2,
   z= 3,
);

Beyond readability, there are a few added benefits to using meaningful keywords for arguments in function definitions and calls.

One thing to note is that we can use the keyword syntax in the definition to establish a default argument, which simultaneously makes that argument optional (meaning that the default will be used unless the caller provides a new value):

def triple_add(x, y, z=10,):
   return (x + y + z);
# fed

w = triple_add(
   x= 1,
   y= 2,
);

print("W:", w);
# W: 13

Another benefit: since we're using named arguments, we don't need to pass them in the same order as the definition when we call the function. There's no ambiguity about which is which, because we've explicitly used their names:

def triple_add(x, y, z=10,):
   return (x + y + z);
# fed

w = triple_add(
   z= 20,
   y= 1,
   x= 2,
);

print("W:", w);
# W: 23

While this may not be entirely recommended all the time, we can also mix unnamed ("positional") and named ("keyword") arguments in the caller:

def triple_add(x, y, z=10,):
   return (x + y + z);
# fed

w = triple_add(1, 2, z= 20,);

print("W:", w);
# W: 23

Usually, the above kind of mixed syntax is used when most of the arguments are meant to be unnamed but the final argument is some kind of flag or modifier that has a default and is being explicitly overwritten with the function call.

Again, the only restriction is that we can't put named arguments before unnamed arguments. The following syntax is invalid and will throw a SyntaxError (with the detail: positional argument follows keyword argument).

def triple_add(x, y, z=10,):
   return (x + y + z);
# fed

w = triple_add(x= 1, 2, z= 20,); # SyntaxError

Note that functions are generally scoped to a module, while "methods" are the name used for functions scoped to a class.

In a class, a method's first argument is unnamed and implicitly receives the instance when the method is called; it's colloquially called self by convention. Classes are often written as:

class Calculator(object):
   def add(self, x, y,):
      return (x + y);
   # fed
   def subtract(self, x, y,):
      return (x - y);
   # fed
# ssalc

Again, self is just a convention and while it is generally recommended to use it for readability, you can name it whatever you like. Here's the same example, completely valid, but using this instead of self:

class Calculator(object):
   def add(this, x, y,):
      return (x + y);
   # fed
   def subtract(this, x, y,):
      return (x - y);
   # fed
# ssalc

The self (implicit) argument is used to reference the instance of the class that called the method. In that respect, it represents the current "state" of the object that called the method. As such, it's used to coordinate shared values across functions. This can be data that's common to the methods or something that is meant to be used in every method.

Given our above example, here's how you would create a Calculator instance and call the subtract method:

class Calculator(object):
   def add(self, x, y,):
      return (x + y);
   # fed
   def subtract(self, x, y,):
      return (x - y);
   # fed
# ssalc

ti89 = Calculator();
z = ti89.subtract(3, 5,);

print(z);
# -2

If you create a module that has a function in it, you can import that module and then call that function using the . (dot) operator just like in the class example above.

Modules and Classes, in this respect, are like "Namespaces" used in C++.

In compiled languages like C++, a function has a fixed number of arguments/parameters. A function of one variable is monadic, a function of two variables is dyadic, and so on. This is known as the adicity (or arity) of a function.

In Python, there is no concept of Function Overloading, due to the way the interpreter was designed. In C++, for example, you can reuse a function's name and declare then define it with different sets of arguments. As long as the arguments are of distinguishable types, you can define many different "overloads" of a function, all with the same name. This has the benefit of allowing you to use the same function call but having it operate differently depending on the data-type of the arguments.

Here's a simple C++ example of an overloaded function:

class PrettyPrinter
{
  public:
    void show_number(float const & x);
    void show_number(int const & x);
    void show_number(int const & x, int const & zeropads);
};

In the above example, we've declared (without defining) three methods of a PrettyPrinter class, but they all have the same name. However, since they all have different numbers/types of arguments, these are all overloaded methods for the show_number() method.

We could imagine that the bodies of these functions are all printing-out the x value differently depending on the data-type. Instead of trying to do type-matching based on coercion of a void-pointer within a single function body, this is a simple approach that lets the compiler choose which method to call. Without getting into the merits (or not) of this example, the basic takeaway should be that if we want to make a generic method in C++ (ignoring templates) we can instead create multiple overloaded versions.

In Python, we can't do the above because all elements are objects, and all objects are referenced by aliases. In essence, every "variable name" in Python is a pointer to an object (in C++ terminology). This means that any redeclaration of an alias rebinds it, replacing whatever it previously pointed to. So if we define three different versions of show_number() in Python, we only get the third one by the time the interpreter has finished reading the code.
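
Here's a minimal sketch of that shadowing behaviour (show_number borrowed from the C++ example above): both definitions execute without error, but the alias ends up pointing only at the last one.

def show_number(x):
   return ("as float: " + str(float(x)));
# fed

def show_number(x):
   return ("as int: " + str(int(x)));
# fed

print(show_number(7));
# as int: 7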

The other obvious thing should be that since Python is implicitly typed, there's no need (or ability, depending on your Python version) to declare the type(s) of the argument(s) for a function. All function arguments are "passed by object-reference", which means that the alias inside the function points at the very same underlying object as the caller's alias. If the function uses object-methods to modify the aliased object, the original object will be modified out-of-scope. If, instead, the function assigns a new value to the argument alias, the local alias is simply rebound to a new object, leaving the original object untouched -- nothing is copied. This gives you behaviour resembling either pass-by-pointer or pass-by-value, though it's much more "tacit"/implicit than in C++.
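
Here's a minimal sketch of that distinction (the aliases are illustrative): mutating through object-methods affects the caller's object, while assignment merely rebinds the local alias.

def mutate_values(val_lst):
   val_lst.append(4);      # object-method: modifies the caller's list
   return (None);
# fed

def rebind_values(val_lst):
   val_lst = [9, 9, 9];    # assignment: rebinds the local alias only
   return (None);
# fed

vals = [1, 2, 3];

mutate_values(vals);
print(vals);
# [1, 2, 3, 4]

rebind_values(vals);
print(vals);
# [1, 2, 3, 4]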

With all that said, it means that in Python, there are way fewer reasons to need to create overloaded functions in the first place. But, the remaining issue is how to deal with differing numbers of arguments and defaulted arguments. So let's go through those one-by-one.

First, we can actually call Python functions in one of 3 ways:

  • Unnamed Arguments ("positional arguments")
  • Named Arguments ("keyword arguments")
  • Unnamed arguments followed by Named Arguments

To handle all three situations with a single alias, Python functions -- technically the __call__() method of all Python function-objects -- all support the following generic, built-in syntax:

def example_fnc(*unnamed_lst, **named_dct):
  # ...
  return (None);
# fed

The syntax we used above may seem unfamiliar, but the point was to highlight how the aliases themselves are inconsequential. You'll often see the same example function with this syntax:

  def example_fnc(*args, **kw_args):
    # ...
    return (None);
  # fed

Both are exactly the same, and it's the asterisks that are doing all the work. A single asterisk packs any extra unnamed (positional) arguments into a tuple, while a double asterisk packs any extra named (keyword) arguments into a dictionary (dict) of key-value pairs.
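
For example (a quick sketch reusing the earlier triple_add), the same asterisk syntax works in reverse at a call-site, unpacking a container into individual arguments:

def triple_add(x, y, z,):
   return (x + y + z);
# fed

args_lst = [1, 2, 3,];
kwargs_dct = {"x": 1, "y": 2, "z": 3,};

print(triple_add(*args_lst));
# 6

print(triple_add(**kwargs_dct));
# 6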

Note that for class-member functions, the dot-operator passes the current object as the first argument, so you can either use the above syntax and just be careful, or you can use the following syntax to make things a little simpler for yourself:

class Example(object):
  name = "Example";
  def __init__(self,):
    return (None);
  # fed
  def example_fnc(self, *unnamed_lst, **named_dct):
    print(self.name);
    # ... other code goes here ...
    return (None);
  # fed
# ssalc

Inside the body of a generic module-function you have access to only two arguments, the unnamed_lst and the named_dct. Inside the body of a generic class-function you also have access to the self alias for the instance, along with unnamed_lst and named_dct. Each container holds as many values as were passed in the call.

Here's a simple example that you can use to print these collapsed containers to see what happens with the different calls below:

def generic_test(*unnamed_lst, **named_dct):
  print("unnamed_lst:", unnamed_lst);
  print("named_dct:", named_dct);
  return (None);
# fed

generic_test();
generic_test(1, "hi", x=7);
generic_test(x=1, y="hi", z=7);
generic_test(1, "hi", 7);

generic_test(x=1, "hi", y=7); # SYNTAX ERROR
  # (Unnamed argument after a named argument.)

Here's an example of the output, so you can verify on your own:

  >>> generic_test(1, "hi", x=7);
  unnamed_lst: (1, 'hi')
  named_dct: {'x': 7}

Now, a terrible way to use this functionality would be to never use named arguments: just let anyone who calls your functions pass whatever arguments they want, and have your function try to suss-out what was given and why. That would be a nightmare for you and for anyone (including yourself) trying to use your function(s) later on. Beyond being arguably unreadable, it would also mean that you're writing low-level C/Assembly-style code, recreating the concept of a function as a generic routine that determines its purpose ad hoc. Possibly interesting as a dynamic-programming problem, but it's gonna be a nightmare.

The real benefit to this syntax is that under certain circumstances, it allows for your code to accept arguments into a function when you won't know what those arguments are, because maybe you don't care.

Imagine writing the code for a "worker", a worker being someone who uses tools. Let's say the worker instance is somehow handed a tool and all tool instances have a .use() method. Now, what if all your workers are beholden to a Technical Manual, which describes the arguments to give to the .use() method for whichever tool is currently needed for the current step in the build process. Seems like in that situation, you could have a worker method like .use_tool() that's defined as such:

  class Worker(object):

    toolkit = None;
    manual = None;
    cur_step = None;
    cur_tool = None;

    def __init__(self, toolkit,):
      self.toolkit = toolkit;
      return (None);
    # fed

    def set_manual(self, new_manual,):
      self.manual = new_manual;
      return (None);
    # fed

    def goto_next_step(self,):
      self.cur_step = self.manual.get_next_step();
      self.cur_tool = self.manual.get_tool(self.cur_step);
      return (None);
    # fed

    def use_tool(self, *args, **kwargs):
      self.cur_tool.use(*args, **kwargs);
      return (None);
    # fed

    def do_task(self,):
      params_dct = self.manual.get_tool_params(self.cur_step);
      self.use_tool(**params_dct);
      return (None);
    # fed

  # ssalc

In the above, params_dct is a dictionary of key-value pairs, so by using the double-asterisk in the call, we're expanding the dictionary into individual named (keyword) arguments. In the use_tool(*args, **kwargs) definition, kwargs will be a dictionary that collapses (contains) any of the keyword arguments passed to the function. So then, to call the .use() method with the proper keyword arguments, we need to again use the double-asterisk to unpack the dictionary into keyword arguments.

Yes, we took a dictionary, unzipped it, rezipped it, and then unzipped it again ... but not really. It's all just iteration, handled by the interpreter as efficiently as possible. Arguably, in this design the use_tool() method adds an extra step, but it also allows for use of tools handed directly to the worker, instead of just those available through the internal manual. So, in the example, we gain generic functionality by adding a couple of extra lines that don't really affect the readability that much.

The real benefit here is how simple/terse the code is but how generically it can be applied. We don't need to know what the arguments of the current tool are, we rely on the current step of the current manual to pass along that information through the .get_tool_params() method. This method could return a list or a dictionary and our method would still work as defined. The onus of coordination then becomes a burden only on the tools and the manuals. The worker is just the go-between.

This reduces redundancy, it tries to mitigate possible mismatches, and it will likely make the code much easier to read and to follow if there's a bug. Worst case scenario is that the tool parameters don't match the functionality of the tools usage method, so then that becomes a developer problem to figure out if they made a mistake in the tool or in the manual. If the worker has no say in the matter, then that eliminates them from the list of possible culprits for the bug.

This also means that tools and manuals can be updated and the worker(s) will always be able to keep up with the latest iterations. The workers themselves won't ever need to be "upgraded" so long as the manual is aligned with the tools.

Arguably, this syntax (while esoteric at first) can lead to a lot of really useful, generic, simple code like the above example. In the next sections you'll also see that this can be expanded further into wrapper-functions and decorators, doing essentially the same job of calling a function from inside another function via pass-through of the alias.

WARNING

Python generically supports the trailing comma syntax, such that all iterables that use commas , can always have an extra comma on the last item.

This is super preferable for avoiding simple hair-pulling mistakes when writing lists or dictionaries or functions.

Trailing commas are useful in situations like this:

def test_func(
    name,
    label,
    value,
    origin,
  ):
  print("name:", name);
  # ...
  return (None);
# fed

You can comment-out any of the arguments and the syntax will always be valid. And you can add any argument anywhere in the list and the syntax will always be valid as long as you always end each line in a comma.

def test_func(
    name,
    label,
    # value,
    origin,
    new_label,
  ):
  print("name:", name);
  # ...
  return (None);
# fed

The other benefit is that with ",".join(list_of_items) you can auto-generate valid syntax if you're writing-out Python scripts for dynamic code generation and such.
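
As a toy sketch of that idea (arg_lst is an illustrative name), a generated call-string is always valid syntax when every item ends in a comma:

arg_lst = ["1", "2", "3",];
call_str = "test_func({0},)".format(",".join(arg_lst));
print(call_str);
# test_func(1,2,3,)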

Often it's just a preference of consistency, but there are benefits as well, and it's an easy habit to adopt to avoid simple, hair-pulling syntax errors. Trying to find an errant or missing comma can often be a nightmare in large code-bases.

HOWEVER, there is one time that you absolutely can not use a trailing comma, and that is in this situation:

def bad_syntax(*args, **kwargs,):
  # ...
  return (None);
# fed

def valid_syntax(*args, **kwargs):
  # ...
  return (None);
# fed

The bad_syntax declaration will fail because you can't possibly have any arguments after the **kwargs collapser for named arguments.

The interpreter will always complain and this is the one situation where you should never put a trailing comma.

Again, arguably, **kwargs functions should be relatively rare and specialized syntax that's fragile to begin with, so great care should be used in editing and documenting any functions that use the generic syntax. Which also means that there shouldn't be a lot of fussing with the arguments list such that you'd get any benefit out of a trailing comma anyways.

Personally, I find it easy enough to remember to never put a trailing comma after **kwargs, and just use trailing-commas everywhere else all the time. It's always saved me more often than it's ever "hurt" me, when developing and maintaining code.

Since Functions are objects in Python, the name of the function is actually the alias for the object.

Given this example:

def add_special(x, y):
  return (x + y);
# fed

The name add_special is a registered Alias within the module ("namespace" / scope) where this function is defined.

This means that we can do things like:

def add_special(x, y):
  return (x + y);
# fed

add_also = add_special;

>>> add_special(3, 5)
8

>>> add_also(3, 5)
8

As shown, we've created a new alias add_also that points to the same object as the original alias (function name) add_special.

The above example is mostly pointless, but something like that comes in handy if we were to create a module called my_maths.py with the following content:

# @file
# @brief My custom Mathematics module
#
# @author Tommy P. Keane
# @email talk@tommypkeane.com

def add_special(x, y):
  return (x + y);
# fed

And in another module we can do the following:

import my_maths;

new_add = my_maths.add_special;

z = new_add(3, 5);

print("z:", z);
# z: 8

By creating the new alias new_add we can simplify the in-module calls to my_maths.add_special, in case the code seems too verbose or the naming is hurting readability.

Arguably a more specific import could've been used like from my_maths import add_special;, but often for extensibility and readability in large codebases, it can be much more helpful to do a module import and then specifically call any methods or classes through the dot-operator.

The argument against such an approach is usually that if add_special gets renamed in the original module, every module that calls it will need a code-change everywhere it's used. By abstracting the call through the new_add alias, you could make it so that only the one line would need to be updated if add_special was renamed or replaced with an equivalent function.

Arguably, you could use this approach of creating a new alias to a function when you use a placeholder function or some library that you're not sure of. In case the library doesn't work out, if the call signature is still the same, you could swap it out at that one line and all your code everywhere else would still work because it's all referencing new_add.

That's a bit of an aside, but this all introduces the abstraction that's at the heart of a wrapper function.

Wrapper Functions are functions which take other functions as arguments, and then call the functions that were passed-in.

In this example we create a function that adds two numbers, and then we create a wrapper function that will pre-print the arguments and post-print the result.

def add_special(x, y):
  return (x + y);
# fed


def show_maths(math_func):

  def do_maths(x, y):
    print("x:", x);
    print("y:", y);
    z = math_func(x, y);
    print("z:", z);
    return (z);
  # fed

  return (do_maths);
# fed

show_add_special = show_maths(add_special);

The results of the two different calls would then look like this:

>>> add_special(3, 5);
8

>>> show_add_special(3, 5);
x: 3
y: 5
z: 8
8

As you can see from above, show_maths returns a function alias that it created internally. This inner function calls the given function that was provided to show_maths.

With this design, the show_maths function is a "wrapper function", since it wraps around the inner do_maths and wraps around whatever function is provided as the math_func argument.

As you can see, the math_func is called with the arguments provided to do_maths, where do_maths is the return value of the show_maths function.

As such, this gives us the ability to do the following short-hand:

def add_special(x, y):
  return (x + y);
# fed


def show_maths(math_func):

  def do_maths(x, y):
    print("x:", x);
    print("y:", y);
    z = math_func(x, y);
    print("z:", z);
    return (z);
  # fed

  return (do_maths);
# fed


z = show_maths(add_special)(3, 5,);


>>> print(z);
8

In the above example, we skip the extra alias of show_add_special and just let the interpreter order-of-operations play-out so that we call the return of show_maths with the arguments 3 and 5.

Lastly, let's combine the wrapper function with the previous concept of a generic function.

Let's say that we don't know what the arguments are for a function, or we want something super generic. Instead of giving named/positional arguments for the inner-scoped function, we can use the generic * and ** syntax to make something that we can use ubiquitously:

def show_maths(math_func):

  def do_maths(*args, **kwargs):
    print("args:", args);
    print("kwargs:", kwargs);
    results = math_func(*args, **kwargs);
    print("results:", results);
    return (results);
  # fed

  return (do_maths);
# fed

The above is now able to be used generically with any math_func that takes any number of unnamed (*args) or named (**kwargs) arguments.

This obviously lacks some detail in the printouts, since it has to be generic, but this is just a simple example. In other implementations you could do all kinds of things using this approach, like opening and closing files or sockets around message-formatting functions, for example.
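
As a quick sketch of that flexibility (quad_add is just an illustrative name), the exact same wrapper now handles a four-argument function without any modification:

def quad_add(a, b, c, d,):
  return (a + b + c + d);
# fed

showy_quad = show_maths(quad_add);

total = showy_quad(1, 2, 3, 4,);
# args: (1, 2, 3, 4)
# kwargs: {}
# results: 10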

The previous section introduced Wrapper Functions in Python, which can actually be implemented in a slightly less-verbose manner using the Decorator syntax and the @ special-character.

In the Wrapper Functions section we showed the following example:

def add_special(x, y,):
  return (x + y);
# fed

def show_maths(math_func,):
  def do_maths(x, y,):
    print("x:", x,);
    print("y:", y,);
    z = math_func(x, y,);
    print("z:", z,);
    return (z);
  # fed
  return (do_maths);
# fed
 
showy_add = show_maths(add_special,);

z = showy_add(3, 5,);

In the above example, show_maths can now be used to "wrap" any extant dyadic-function by passing its alias, and will return a new dyadic-function alias that will print the arguments and then print the result, before and after running the function.

That basic wrapper syntax becomes extremely useful in just those kinds of situations where you want to do something before or after a function, repeatedly, and you want to centralize your code. Avoiding repetitious code and allowing the computer to repeat things for you is one of the tenets of good software development.

The only problem is that now you need to do extra alias wrangling. We had a perfectly good function name with add_special, but now we need to use showy_add to get the wrapper benefits. But what if we didn't want to have to deal with a new alias, and we just wanted to keep our original alias?

Not only does that mitigate the need to come up with all kinds of new function names -- and avoids the trouble of unimaginative people creating really gross function names -- that would also allow us to "secretly" modify extant functions with a wrapper that does not require any changes to the code that calls the function.

It's not really a "secret", but it allows us to basically inject functionality without having to change a lot of code. As long as things are readable or sensible, this is great. This could be used to wrap functions with logging features, as shown in the example, and not have to worry about going around changing the syntax of every call to the functions.

So how do we do this?

One "cheating" way to do this is to just cross-our-fingers and do some alias swapping:

def add_special(x, y):
  return (x + y);
# fed

def show_maths(math_func,):
  def do_maths(x, y,):
    print("x:", x,);
    print("y:", y,);
    z = math_func(x, y,);
    print("z:", z,);
    return (z);
  # fed
  return (do_maths);
# fed

add_special = show_maths(add_special,);

z = add_special(3, 5,);

The sketchy thing about the above code is that we're reassigning the alias we originally created, so if we change that original alias we have to keep this new one up to date -- which honestly isn't a real burden, because you have to change everywhere that you called it, too.

A bigger concern is that if we get a little too clever, we risk dropping all references to the original function without properly deep-copying it, thus making the downstream alias invalid. This won't happen in a simple example like above, but if things get too complicated, you're at the mercy of the interpreter and making sure it knows what you meant to do.

Now, personally, I kinda like the above example because it's really blunt in terms of readability. Going from top to bottom, there's basically no question about what's happening -- it's all there. There's probably no real risk to using the above syntax, and worst-case you're a little extra verbose.

The Decorator syntax is almost identical, except we now remove the need to do any alias shuffling, and instead we rely on the @ character to be handled by the interpreter:

def show_maths(math_func,):
  def do_maths(x, y,):
    print("x:", x,);
    print("y:", y,);
    z = math_func(x, y,);
    print("z:", z,);
    return (z);
  # fed
  return (do_maths);
# fed

@show_maths
def add_special(x, y,):
  return (x + y);
# fed

z = add_special(3, 5,);

For safety's sake (and readability's sake), we've rearranged the original example and defined the wrapper-function show_maths before our target function add_special. This is because we're going to use the show_maths decorator-syntax by putting @show_maths on the line directly above the def line.

Technically, as long as everything is at the same scope in the same module, we don't need to worry about the ordering, but I find it nicer to do it this way.

As you can see, we're now 1 line shorter, as we don't need to re-alias add_special because the @ symbol is already doing it for us.

The above examples are identical, so technically it's just a matter of preference / convention.

However, the @ decorator syntax starts to come in handy when we start looking at generic methods, or wanting to pass arguments to the decorator (the wrapper-function).

Let's create a new wrapper-function that checks if either of the 2 arguments to a dyadic maths-function is evenly (read: integer) divisible by 10, and we'll start by using the syntax we know.

def check_for_tens(math_func,):
  def do_maths(x, y,):
    if ((x % 10) == 0):
        print("x is evenly divisible by 10");
    # fi
    if ((y % 10) == 0):
        print("y is evenly divisible by 10");
    # fi
    z = math_func(x, y);
    return (z);
  # fed
  return (do_maths);
# fed

Now, let's say that we want to genericise ("make generic") this function so that we can check for divisibility by whatever value is provided to the wrapper function.

Immediately, this poses a bit of an issue with the syntax as we've introduced it. The wrapper takes in a function and returns a function, and this is mandatory for using the @ decorator syntax. So, how do we pass in a new variable d to be the divisor we want to test?

Here, the precedence order of operations for the interpreter comes into play. You can find precedence details in the official documentation -- Python Operator Precedence -- but we'll summarise the important bit: the call parentheses () are evaluated before the decorator @ is applied.

That means that if we can wrap our wrapper in a function that takes our d argument, and return the wrapper function before using the @, then we can do this!

So let's rewrite check_for_tens as check_for_divisor:

def check_for_divisor(d,):
    def check_for_divisor_wrapper(math_func,):
      def do_maths(x, y,):
        if ((x % d) == 0):
            print("x is evenly divisible by {0:d}".format(d,));
        # fi
        if ((y % d) == 0):
            print("y is evenly divisible by {0:d}".format(d,));
        # fi
        z = math_func(x, y,);
        return (z);
      # fed
      return (do_maths);
    # fed
    return (check_for_divisor_wrapper);
# fed

We can now use our new double-wrapper function as a decorator that takes an argument:

@check_for_divisor(10,)
def add_special(x, y,):
  return (x + y);
# fed

>>> w = add_special(3, 25,);
(no output; neither 3 nor 25 is divisible by 10)

>>> z = add_special(50, 35,);
x is evenly divisible by 10

This may all seem really obtuse, so let's go over it again.

From the above nested definition, we have 3 functions: check_for_divisor, check_for_divisor_wrapper, and do_maths. By then calling:

@check_for_divisor(10,)
def add_special(x, y,):

We're saying:

  1. Call check_for_divisor with the argument 10, which returns check_for_divisor_wrapper.
  2. Call the return (check_for_divisor_wrapper) function by passing in add_special.
  3. Reassign the add_special alias to the return (do_maths).

So, to be clear, when we call w = add_special(3, 25,) we're actually calling w = do_maths(3, 25,), which has been updated to use d = 10 internally thanks to the @check_for_divisor(10,) call.

Everywhere in our code, though, we'll only refer to add_special. Thus check_for_divisor "decorates" the add_special definition, which, itself, remains accessible in the code. This is why it's a decorator, not a wrapper or a replacement.

Lastly, you may be wondering what the implication is of this alias mess-about that we've done. Mostly, it's fine, but as we just said, we're actually calling w = do_maths(3, 25,), we've just re-aliased it. Fun fact, though, re-aliasing doesn't actually change the __name__ member of a function object. The __name__ is the alias that was used to originally define a function.

So, in this case we'll see that the __name__ of add_special becomes do_maths, once it's decorated:

@check_for_divisor(10,)
def add_special(x, y,):
  return (x + y);
# fed

print(add_special.__name__);
# "do_maths"

Arguably, this is a very good thing. This actually leaves behind a "receipt" indicating that add_special has been wrapped and re-aliased -- a.k.a., "decorated". But, this can pose a new problem. Look at the following situation:

@check_for_divisor(10,)
def add_special(x, y,):
  return (x + y);
# fed

@check_for_divisor(10,)
def sub_special(x, y,):
  return (x - y);
# fed

print(add_special.__name__);
# "do_maths"

print(sub_special.__name__);
# "do_maths"

Ruh-roh, both functions now have the same __name__, because they're both calling the same inner function (though, different "instances" of it). But how can we have two instances of the "same" function? Functions that are defined within a function are known as being locally-scoped, but as soon as we return the alias from the outer function, that inner-function persists in memory for as long as anything still holds a reference to it.

Every function call to the wrapper, thus, creates a new "instance" of the same function with the same characteristics. The __name__ may be the same, but the memory addresses are different, indicating that they're different "instances". We can verify that by printing the __repr__ as provided by calling print() on the alias:

print(add_special);
# <function check_for_divisor.<locals>.check_for_divisor_wrapper.<locals>.do_maths at 0x1019b7ee0>

print(sub_special);
# <function check_for_divisor.<locals>.check_for_divisor_wrapper.<locals>.do_maths at 0x1019da040>

Whoa!? See? Each do_maths instance is nested within the inner <locals> set of local-memory ("variables") within each of the nested wrapper functions. Ultimately, though, one is (in my circumstances) at address 0x1019b7ee0 and the other is at 0x1019da040.

Now, since these are different "instances", this kind of poses a problem with the naming situation. It's not actually accurate to rely on the __name__, as is, under these circumstances. Sure, it tells us the alias of the original function name that's being called, but really, that's misleading because different instances could've been constructed with different parameters, and many things could be different. If we were trying to use the __name__ to identify or compare callers, then we'd be in a state of ambiguity.

What would be really nice is if we could preserve the __name__ of the wrapped function. Thankfully the __name__ element is mutable! So let's just reassign it.

So we originally had:

def check_for_divisor(d,):
    def check_for_divisor_wrapper(math_func,):
      def do_maths(x, y,):
        if ((x % d) == 0):
            print("x is evenly divisible by {0:d}".format(d,));
        # fi
        if ((y % d) == 0):
            print("y is evenly divisible by {0:d}".format(d,));
        # fi
        z = math_func(x, y,);
        return (z);
      # fed
      return (do_maths);
    # fed
    return (check_for_divisor_wrapper);
# fed

Now let's add a line to reassign the __name__ before we return the function.

def check_for_divisor(d,):
    def check_for_divisor_wrapper(math_func,):
      def do_maths(x, y,):
        if ((x % d) == 0):
            print("x is evenly divisible by {0:d}".format(d,));
        # fi
        if ((y % d) == 0):
            print("y is evenly divisible by {0:d}".format(d,));
        # fi
        z = math_func(x, y,);
        return (z);
      # fed
      do_maths.__name__ = math_func.__name__;
      return (do_maths);
    # fed
    return (check_for_divisor_wrapper);
# fed

Now what do we see?

print(add_special.__name__);
# "add_special"

print(sub_special.__name__);
# "sub_special"

print(add_special);
# <function check_for_divisor.<locals>.check_for_divisor_wrapper.<locals>.do_maths at 0x1019b7ee0>

print(sub_special);
# <function check_for_divisor.<locals>.check_for_divisor_wrapper.<locals>.do_maths at 0x1019da040>

Success! Our __name__ values changed. However, you can see that the __repr__ values stayed the same. This is desirable and expected, though. We still are actually calling the inner do_maths "instances", and thankfully this gives us our "receipt" to show that. And, we now have differently "named" functions, so we can now reliably use the __name__ as a means of identifying a function -- giving us the best of both situations.
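
As an aside, the Python Standard Library packages this exact pattern (and a bit more) as functools.wraps. Here's a sketch of our same wrapper using it; note that wraps also copies __doc__ and __qualname__ from the wrapped function, so even the printed repr would show the original name:

import functools;

def check_for_divisor(d,):
    def check_for_divisor_wrapper(math_func,):
      @functools.wraps(math_func)
      def do_maths(x, y,):
        if ((x % d) == 0):
            print("x is evenly divisible by {0:d}".format(d,));
        # fi
        if ((y % d) == 0):
            print("y is evenly divisible by {0:d}".format(d,));
        # fi
        z = math_func(x, y,);
        return (z);
      # fed
      return (do_maths);
    # fed
    return (check_for_divisor_wrapper);
# fed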

And that's the basics of decorators.

Obviously we can genericise the inner or outer wrappers that take specific arguments, and we could namespace our wrappers by defining them as class methods instead of just being module methods, if we wanted. As class methods, we would get access to the object alias (self, by convention), so there's lots of options there for instancing and containing wrappers that have unique, controllable characteristics.

Before we finish, let's try something a little "wacky" ... let's see if we can make our divisor into an accessible, manipulable element by using a class-based wrapper.

class MathsChecker(object):
    d = None;
    def __init__(self, d):
        self.d = d;
        return (None);
    # fed
    def set_divisor(self, d):
        self.d = d;
        return (None);
    # fed
    def check_for_divisor(self,):
        def check_for_divisor_wrapper(math_func,):
          def do_maths(x, y,):
            if ((x % self.d) == 0):
                print("x is evenly divisible by {0:d}".format(self.d,));
            # fi
            if ((y % self.d) == 0):
                print("y is evenly divisible by {0:d}".format(self.d,));
            # fi
            z = math_func(x, y,);
            return (z);
          # fed
          do_maths.__name__ = math_func.__name__;
          return (do_maths);
        # fed
        return (check_for_divisor_wrapper);
    # fed
# ssalc

Now let's create an instance, and use it to decorate a module method:

maths_checker_obj = MathsChecker(10,);

@maths_checker_obj.check_for_divisor()
def add_special(x, y,):
  return (x + y);
# fed

>>> w = add_special(30, 5);
x is evenly divisible by 10

maths_checker_obj.set_divisor(5);

>>> z = add_special(30, 5);
x is evenly divisible by 5
y is evenly divisible by 5

Fantastico! We now have a dynamically configurable, decorated version of the add_special function. At any time, we can call maths_checker_obj.set_divisor() and this will change the decorations around add_special.

How would this be useful?

Imagine you had a logging class that provided a logging wrapper method that you wanted to establish as a decorator, but you wanted to make the logging-level user-configurable without having to restart the interpreter (the application). This way, you could have a logger_obj.set_logging_level(...) method that lets the logging-level dynamically change while the app still runs.
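
Here's a minimal sketch of that idea (all names here are illustrative, not a real logging library):

class Logger(object):
    level = 0;
    def __init__(self, level,):
        self.level = level;
        return (None);
    # fed
    def set_logging_level(self, new_level,):
        self.level = new_level;
        return (None);
    # fed
    def logged(self,):
        def logged_wrapper(any_func,):
            def do_logged(*args, **kwargs):
                if (self.level >= 1):
                    print("CALL:", any_func.__name__, args, kwargs);
                # fi
                result = any_func(*args, **kwargs);
                if (self.level >= 2):
                    print("RESULT:", result);
                # fi
                return (result);
            # fed
            do_logged.__name__ = any_func.__name__;
            return (do_logged);
        # fed
        return (logged_wrapper);
    # fed
# ssalc

logger_obj = Logger(1,);

@logger_obj.logged()
def add_special(x, y,):
    return (x + y);
# fed

z = add_special(3, 5,);
# CALL: add_special (3, 5) {}

logger_obj.set_logging_level(2,);

z = add_special(3, 5,);
# CALL: add_special (3, 5) {}
# RESULT: 8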

Readability definitely runs the risk of being lost in all of this wrapping and re-aliasing, but if you choose meaningful function names and add in docstrings (unlike what I did here) -- do as I say, not as I do -- you should be able to gain a lot of functionality while avoiding a lot of repetition in your codebase.

It's perfectly valid, if not outright encouraged, to create a wrapper if you find yourself doing the same "thing" over and over again before or after a bunch of different functions.

The Python Standard Library (PSL) actually has a few built-in, commonly-available decorators that are predefined without needing any imports.

Arguably, not all of these are generally useful, and you shouldn't just use a decorator for the sake of using it (see the next section). So, you may see code that overuses @staticmethod or @property or @classmethod, and you should ask yourself if it really was a good pattern and whether you should continue it or cull it.

That said, you can't make informed decisions without being informed, so let's introduce some common PSL decorators and some used by popular Python packages.

Python Standard Library Decorators

These decorators don't require any imports and can be used in any of your code at any time. You can find the official documentation here in the Built-in Functions page.

@classmethod

The @classmethod decorator modifies a method of a class so that the first "positional" (unnamed) argument is always the class, not the instance of the class, when called.

Conventional class methods look like this:

class Example(object):
  name = "Example";
  def set_name(self, new_name,):
    self.name = new_name;
    return (None);
  # fed
  def show_name(self,):
    print(self.name);
    return (None);
  # fed
  def reset_name(self,):
    self.name = Example.name;
    return (None);
  # fed
# ssalc

As you can see, they all use (by convention) the self alias for their first argument, to indicate that they are accepting the instance of the class when called.

With @classmethod you could add a function to print the default name (though this isn't the only way to do it):

class Example(object):
  name = "Example";
  def set_name(self, new_name,):
    self.name = new_name;
    return (None);
  # fed
  def show_name(self,):
    print(self.name);
    return (None);
  # fed
  def reset_name(self,):
    self.name = Example.name;
    return (None);
  # fed
  @classmethod
  def show_default_name(cls,):
    print(cls.name);
    return (None);
  # fed
# ssalc

If you're familiar with C++, this is similar to a static method, such that it has access to the class instead of the instance -- as if it were shared "statically" between all instances. Arguably, the interpreter would only need to register one "instance" of the show_default_name method in memory and just let all instances of the Example class use it, because it's always the same definition no matter what the state of the Example instance is.

Because of that, you may see arguments for using @classmethod if you want/need to save memory. But be very careful that you don't over-zealously try to pre-emptively optimise by using @classmethod, because it's creating a fundamentally different kind of function. This is discussed more in the Advice on Decorators section, on this page.

Without getting too far into it right now, just consider: how could you implement reset_name as an @classmethod-decorated function?

Here's one way to do it, that you might agree is pretty gross to look at:

class Example(object):
  name = "Example";
  def set_name(self, new_name,):
    self.name = new_name;
    return (None);
  # fed
  def show_name(self,):
    print(self.name);
    return (None);
  # fed
  @classmethod
  def reset_name(cls, obj,):
    obj.name = cls.name;
    return (None);
  # fed
  @classmethod
  def show_default_name(cls,):
    print(cls.name);
    return (None);
  # fed
# ssalc

a_obj = Example();
a_obj.show_name();          # "Example"
a_obj.set_name("A",);
a_obj.show_name();          # "A"
a_obj.reset_name(a_obj,);
a_obj.show_name();          # "Example"

Using @classmethod required us to write a_obj.reset_name(a_obj,); instead of just writing a_obj.reset_name();. This also made it so that we could do a_obj.reset_name(b_obj,);, which seems like a strange division of labor. True, this made each Example instance smaller in its memory footprint, but we run the risk of resetting the name of a different object than we intended.

Arguably, it leads to risky, hard-to-read code. So, it has its benefits, but since this is Python and not C++, we should probably be writing code differently, so that it's most functional and most readable, understanding that we're at the mercy of an interpreter, not a compiler, so there's only so much optimization to be gained.

Lastly, an important note from the official documentation:

"If a[n @classmethod function] is called for a derived class, the derived class object is passed as the implied first argument."

@staticmethod

The @classmethod decorator doesn't really provide the same functionality as a static method that you may be familiar with from C++ or Java. If you want a truly "static" method, the Python Standard Library provides the @staticmethod decorator.

Everything said about the @classmethod is still relevant, the only difference is that the @staticmethod does not provide any implicit "positional" (unnamed) argument when it's called.

Here's our @classmethod example:

class Example(object):
  name = "Example";
  def set_name(self, new_name,):
    self.name = new_name;
    return (None);
  # fed
  def show_name(self,):
    print(self.name);
    return (None);
  # fed
  def reset_name(self,):
    self.name = Example.name;
    return (None);
  # fed
  @classmethod
  def show_default_name(cls,):
    print(cls.name);
    return (None);
  # fed
# ssalc

Here's how that would look if we changed show_default_name to be decorated as an @staticmethod function:

class Example(object):
  name = "Example";
  def set_name(self, new_name,):
    self.name = new_name;
    return (None);
  # fed
  def show_name(self,):
    print(self.name);
    return (None);
  # fed
  def reset_name(self,):
    self.name = Example.name;
    return (None);
  # fed
  @staticmethod
  def show_default_name():
    print(Example.name);
    return (None);
  # fed
# ssalc

Again, this is basically the same as @classmethod, it's just that there's no implicit first argument.

As advice, we'd suggest you carefully consider if you really need an @staticmethod decorated class-method, or if your readability and extensibility would be helped by having a top-level module method instead. An @staticmethod can't manipulate an object or class unless they're passed to it explicitly, or are closure-scoped to it, so their functionality is arguably limited.

A benefit, though, would be if you want to provide a kind of namespace-scoped function that's available through access to a class or its instances. In this case, it'd make more sense (if possible) to use an @staticmethod on a base class, so it's available through all derived classes ("subclasses") and instances.
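
Continuing the Example class from just above (DerivedExample is an illustrative name), here's a quick sketch of that point. The @staticmethod version is reachable through the derived class and its instances; but note that since it hard-codes Example.name, it prints "Example" even when the derived class overrides name:

class DerivedExample(Example):
  name = "Derived";
# ssalc

DerivedExample.show_default_name();
# "Example"

d_obj = DerivedExample();
d_obj.show_default_name();
# "Example"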

@property

There are actually two versions/approaches here. From this documentation, you'll see that there is a built-in class called property whose __init__ method takes four optional arguments: fget (function), fset (function), fdel (function), and doc (string).

So, not really as a classic decorator, but as an internal wrapper class for creating new member-object instances, we can do the following:

class Vector(object):
  val_lst = [];
  def get_values(self,):
    return (self.val_lst);
  # fed
  def set_values(self, new_lst,):
    self.val_lst = new_lst;
    return (None);
  # fed
  def del_values(self,):
    del self.val_lst;
    return (None);
  # fed

  value = property(
    get_values,
    set_values,
    del_values,
    "This is the vector value array.",
  );
# ssalc

First, be sure to notice that value as created by property is a member-element of Vector.

Why not just use val_lst? Well, here's what the above syntax provides you the ability to do. First, you can now access the internal val_lst through the get_values() accessor method by simply calling:

x = Vector();
x_value = x.value;

If you want to update the val_lst by using the setter, you can simply do the following:

x = Vector();
x.value = [1, 2, 3];

And if you wanted to permanently remove the val_lst from your instance, you just need to do:

x = Vector();
del x.value;

But, you might wonder, isn't this all the same as just doing:

x = Vector();
x.val_lst = [1, 2, 3];  # Set
x_value = x.val_lst;    # Get
del x.val_lst;          # Delete

Yup! You're right! It's exactly the same, but through an extra couple layers of abstraction.

So, why would we want that?

Well, if you're familiar with Software Design from C++, you'll know that encapsulation is a wonderful rule-of-thumb to follow to err on the side of safety and conformance for dealing with multiple-access issues and code-base coordination.

Accessing val_lst directly is arguably unsafe, because what if other methods are changing it or using it in ways that are specific to the class? You're basically circumventing the design of the class to reach in directly and change something willy-nilly.

If the class is designed really well and is substantially complex enough, it's likely that there's extra coordination and/or state-management that's happening through the accessor methods that you wouldn't be getting if you avoid using them.

Imagine a Counter class that keeps track of previously set values. Every time you set ("update") the count, its setter method could log the previous count before updating to the new one. This would provide you with a history, that could be great for graphing/plotting or using for statistical analyses. Logging data is hugely important in scientific, engineering, and computing applications, and by not using the designed code as intended, you risk circumventing (if not outright eliminating) that functionality.
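
Here's a minimal sketch of such a Counter (illustrative names, using the property constructor from above):

class Counter(object):
  def __init__(self,):
    self._count = 0;
    self.history = [];
    return (None);
  # fed
  def get_count(self,):
    return (self._count);
  # fed
  def set_count(self, new_count,):
    self.history.append(self._count);  # log the previous count
    self._count = new_count;
    return (None);
  # fed

  count = property(
    get_count,
    set_count,
    None,
    "The current count; updates log the previous value.",
  );
# ssalc

c = Counter();
c.count = 5;
c.count = 12;

print(c.count);
# 12

print(c.history);
# [0, 5]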

So, creating the set_..., get_..., and del_... methods for elements in a class can be a great way to indicate an implicit "contract" of how to interact with instances of the class (the objects).

But, that's 2 or 3 functions (with or without the del) for every publicly accessible element, which can get pretty verbose pretty fast ... though that should be recognizable from C++.

So, that's often why you'll see lots of classes or derived classes, and lots of nuanced design in well-thought-out C++ codebases to try to make sure that you're not repeating yourself any more than is necessary, because the desirable functionality is already verbose enough as it is.

As the first example showed, the property constructor doesn't really save you anything in code you're writing when creating a class, but it certainly saves a lot of code for all the objects that use the new value element as an indirect accessor to the val_lst element.

But, we can also go one step further and use @property to save some code in the class, as well.

Here's that same example but a little less verbose:

class Vector(object):
  val_lst = [];
  @property
  def value(self,):
    """Access this vector's value array."""
    return (self.val_lst);
  # fed
  @value.setter
  def value(self, new_lst,):
    """Update this vector's value array."""
    self.val_lst = new_lst;
    return (None);
  # fed
  @value.deleter
  def value(self,):
    """Delete this vector's value array."""
    del self.val_lst;
    return (None);
  # fed
# ssalc

We've made the code a little longer by adding documentation, but there are a few shortcuts you should notice. We've eliminated the need to call the property constructor, we don't need to create (perhaps convoluted) new names for each method, and we now have standardized .setter and .deleter syntax that may arguably be easier to follow than fget, fset, and fdel, for anyone new to this code.

It's honestly just a matter of preference in terms of which approach you'd use for defining class property instances, but there's a lot of readability benefit (in this case) to the @<property_name>.<type> syntax. Personally, I'd go with the decorator syntax, because I do find it much more readable and easier to maintain.

Be aware, though, that in terms of the naming, it's a shortcut and a necessity to use the same function name for accessor-abstraction methods.

All the methods are now named value -- they're all the same. This is necessary to make the code work. Since we're no longer establishing a new element of the class to use as our public interface to adhere to our internal design "contract", we need all the function names to "represent" that public element. We'll usually have multiple property instances inside a class, so the only way to coordinate them with the @ (decorator) syntax is to make sure the methods all have the same name.

Here's a class with two public accessor abstractions, to clarify that point (with a Point):

class Point(object):
  e0 = None;
  e1 = None;
  @property
  def x(self,):
    """Access this point's x-value (e0)."""
    return (self.e0);
  # fed
  @x.setter
  def x(self, new_x,):
    """Update this point's x-value (e0)."""
    self.e0 = new_x;
    return (None);
  # fed
  @x.deleter
  def x(self,):
    """Delete this point's x-value (e0) by setting it back to `None`."""
    del self.e0;
    return (None);
  # fed
  @property
  def y(self,):
    """Access this point's y-value (e1)."""
    return (self.e1);
  # fed
  @y.setter
  def y(self, new_y,):
    """Update this point's y-value (e1)."""
    self.e1 = new_y;
    return (None);
  # fed
  @y.deleter
  def y(self,):
    """Delete this point's y-value (e1) by setting it back to `None`."""
    del self.e1;
    return (None);
  # fed
# ssalc

We could now, using some Python tuple-unpacking syntax, do the following:

p0 = Point();

(p0.x, p0.y,) = (3, 7,);

Again, this is a trivial example that doesn't do anything special that you wouldn't get from just doing this instead:

class Point():
  x = None;
  y = None;
# ssalc

That class has the exact same functionality, even without any methods, and this is still valid:

p0 = Point();

(p0.x, p0.y,) = (3, 7,);

So, the real deciding factor in using the property syntax should be if it's necessary or useful to your code. You get the functionality innately since all members of all classes are public.

This last section goes through some personal advice I'd like to share about where, when, and how often (or not) to use decorators in your Python software development.

As the previous sections showed, decorators and wrappers add obfuscation that results in pseudo-transparent functions. Code can seem clear, but really there are layers of abstraction that can be hard to follow in the moment. The last thing you want to do is introduce a decorator into a substantial codebase, cause a bug, and be unable to trace it due to too many abstractions, wrappers, and convoluted aliases.

It's a good rule-of-thumb to always explicitly justify a wrapper. If there's no justification for its use, then question it. And if it doesn't hold up under scrutiny, you're probably better off removing it.

For example, I was once tasked to update and maintain a codebase that (among other things) overused the @classmethod decorator. In order to implement logging, a custom logger class was created and it was backhandedly injected into all other worker classes through an alias-shuffling registration by using @classmethod functions. We'll come back to this example, but essentially it was a convoluted reversal of the Observer Pattern code-design.

To clarify this, let's look at what the @classmethod decorator does. This decorator modifies class-methods (member functions of a class) such that when called they get passed their class, not the current instance, as their first unnamed ("positional") parameter.

Here's an example:

class ExampleClass(object):
  name = "Example";
  label = None;
  def __init__(self, name, label):
    self.name = name;
    self.label = label;
    return (None);
  # fed
  @classmethod
  def print_default_name(cls,):
    print(cls.name);
    return (None);
  # fed
# ssalc

Now, as you can see, the print_default_name method was decorated and we changed the first parameter from the conventional self to cls. This wasn't arbitrary: we're now getting the class itself, not the current instance of the class (an object), when we call this method.

Arguably, the example above is perfectly valid and useful. However, the same functionality can be achieved without the @classmethod decorator by simply creating another variable to hold the default name external to the class. I'd argue that such an approach would be more readable and a lot simpler to follow.

DEFAULT_NAME = "Example";

class ExampleClass(object):
  name = DEFAULT_NAME;
  label = None;
  def __init__(self, name, label):
    self.name = name;
    self.label = label;
    return (None);
  # fed
  def print_default_name(self,):
    print(DEFAULT_NAME);
    return (None);
  # fed
# ssalc

Or another approach:

class ExampleClass(object):
  name = "Example";
  label = None;
  def __init__(self, name, label):
    self.name = name;
    self.label = label;
    return (None);
  # fed
  def print_default_name(self,):
    print(ExampleClass.name);
    return (None);
  # fed
# ssalc

Again, it's all a matter of preference and intention, but I'd suggest that the above two approaches are a lot easier to read/follow than the @classmethod version. These approaches also pass the current instance to the print_default_name method, which allows this code to be more easily extensible, and could arguably do more than the @classmethod version. For example, say that we wanted to reset the name to the default value instead of just printing it. That can't be done with the @classmethod version, but we can easily change the third version:

class ExampleClass(object):
  name = "Example";
  label = None;
  def __init__(self, name, label):
    self.name = name;
    self.label = label;
    return (None);
  # fed
  def reset_name(self,):
    self.name = ExampleClass.name;
    return (None);
  # fed
# ssalc

Using @classmethod would have put the code into a design that would've required more redesign and may have been harder for someone unfamiliar with the code to figure how to get from there to here. There's often an overt implication among Software Engineers that can make others fearful to change the original code too much, even when adding a new feature or fundamentally changing the design. Especially when code is undocumented, it makes it harder for someone who comes along and needs to maintain or edit it. In the absence of any explanation, people may tend to err on the side of assuming that the original writer of the code knew what they were doing -- which isn't always true.

The goal should certainly be minimal code and minimal changes, but that should also be measured against whatever is most readable. The arguments for minimal code and minimal changes exist because if there's less code, it's easier to read, and if the changes between version-commits are minimal, it's easier to follow the evolution of the design. These things all exist under the goal of readability, which should be first and foremost.

So, in this case, @classmethod is a roundabout (and possibly misleading) solution to a simple problem.

Getting back to the example I originally mentioned: in my circumstance, the code I was working on was originally written using @classmethod all over the place to avoid module-level logger singletons (single instance objects of a class), all in a convoluted design to share a logger across worker objects.

In this situation, it felt like a design that was copied from a simple online example that had all gotten a little out of hand. It certainly worked, but not only was it difficult to read, it also led to some overzealous "consistency" where @classmethod got added where it wasn't even used or needed. And, in trying to create new worker classes that needed logging functionality, the ability to register a new class with a logger required a deep, esoteric understanding of the @classmethod convention in situ that didn't really naturally lend itself to reusing shared-code by creating derived classes.

Obviously, every situation is going to be unique and different. I'm not saying never use decorators, I'm just saying don't always use decorators.

In one-off examples you'll find online you may notice people like using the @property decorator, since it collapses two methods into one, and reinforces encapsulation by using a get/set dual-purpose method instead of just "publicly" accessing and modifying class elements.

However, take that with a grain of salt, understanding a few key facts:

  • There are no actual public, private, or protected attributes in Python classes.
  • Getter and Setter methods are helpful when doing complex updates, but arguably direct access requires much less code.
  • People giving code examples online are trying to be abstract, generic, and terse. A quick and simple example is more readable, but that inherently makes it unlikely to be ubiquitously practical. Your codebase is going to be specific and contrived to your purposes, and your two goals should be functionality and readability. If @property isn't helping really enhance either of those, then what benefit do you get from just copying it because it's what's shown everywhere online?

So, again, just because Python offers these decorators, that doesn't mean you have to use them. In fact, I would say to err on the safe side and avoid them until you can do your own R&D and test them out and see what works for you and what doesn't.

It's much easier to add decorators to extant code to gain functionality, than it is to remove decorators and maintain the same functionality.

Lastly, I think it's fair to say that the decorator syntax is arguably unreadable without esoteric knowledge of Python. So, as soon as you start decorating your functions, you're going to need to document and justify why you're doing it. Otherwise, your code may end up being so terse that it becomes unreadable without consulting documentation and tracing the runtime, which kinda defeats half of the purpose of the code.

Computers read and write binary numbers, people read and write code -- so write your code to be read by a person, and don't try to outsmart the computer, or you're gonna have a bad time.

I hope this helps! Good luck!
