Tuesday, 29 August 2017

A context manager for testing whether an object has changed

Firstly an apology for the break of well over a year in this blog; I was busy with other projects and took my eye off this series.

In my current python project, I have a class which will under some conditions create a clone of an instance of itself and then make changes to the cloned instance.

It is critical to future operation that the changes are only made to the clone, and that the original is not changed; I wanted to find a simple way to test this.

My first try was to do this (as an example) :

from copy import copy

class ClassUnderTest:
    def __init__(self, value):
        """Example Class to be tested"""
        self.x = value

    def operation(self, value):
        """Create a new instance with the x attribute incremented"""
        self.x += value        # Note : this changes self before the copy is taken
        clone = copy(self)
        return clone

def test1():

    inst1 = ClassUnderTest( 5 )
    inst_copy = copy(inst1)

    inst2 = inst1.operation(1)

    assert inst1 == inst_copy

    assert inst2.x == 6

By taking a copy of the instance being tested, we can compare before and after versions of the instance. In this case inst1 is a reference to the instance being tested, so if the operation changes the instance (as it does in this case), then inst1 will change. The copy, though, won't change at all, so comparing the two objects (as in the first assert statement) will confirm whether the operation has changed the original instance rather than the new instance.

This approach has a number of issues :
  1. It is a pain to do this across 20 or 30 test cases
  2. It isn't that readable as to what this test is doing or why
  3. It only works if the class under test implements the __eq__ and __hash__ methods
I realized that a solution with a context manager would, at least in part, solve the first issue; if I could write something like this :

def test1():

    inst1 = ClassUnderTest( 5 )

    with Invariant(inst1):
       inst2 = inst1.operation(1)

    assert inst2.x == 6

Assuming the context manager works - more on that in a bit - this is eminently more readable. It is clear that the expectation within the with block is that inst1 is invariant (i.e. it doesn't change). There is far less boilerplate, and the test function is clearly executing tests, rather than doing work to make the test possible.

My first version of the Invariant context manager was :

class Invariant():
    def __init__(self, obj):
        """Context manager to confirm that an object doesn't change within the context"""
        self._obj = obj
        self._old_obj = copy(obj)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Don't suppress exceptions
        if exc_type:
            return False

        if self._obj != self._old_obj:
            raise ValueError(
                    'Objects are different :\n'
                    'original : {original!r}\n'
                    '     now : {now!r}'.format(
                        original=self._old_obj, now=self._obj)) from None

The issue with this version is that it still relies on the class under test implementing a meaningful __eq__ method; and the exception that is raised will only tell you that the instance has changed, not which attributes have changed.

A better approach would be to compare each attribute individually and report any changes - thus this version :

class Invariant():
    def __init__(self, obj):
        """Context manager to confirm that an object doesn't change within the context

           Confirms that each attribute is bound to the same value, or to an object
           which tests equal. This is a shallow comparison only - it doesn't test
           attributes of attributes.
        """
        self._obj = obj
        self._old_obj = copy(obj)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Don't suppress any exceptions
        if exc_type:
            return False
        # Compare each attribute
        for att in self._obj.__dict__:
            if getattr(self._obj, att, None) != getattr(self._old_obj, att, None):
                raise ValueError(
                    '{name} attribute is different :\n'
                    'original : {original!r}\n'
                    '     now : {now!r}'.format(
                            name=att,
                            original=getattr(self._old_obj, att, None),
                            now=getattr(self._obj, att, None))) from None

This is far better, but it still has a few issues - left for the reader :
  • It requires the attributes of the class to implement __eq__; not an issue if all attributes are simple built-in classes (int, float, list etc.)
  • It will report each changed attribute one at a time, not all at once
  • While it will spot any attributes that have either changed or been added to the instance, it won't detect any attributes which have been deleted.
  • It won't work with classes which use __slots__
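As a hint towards the second and third issues, the comparison can walk the union of the two __dict__s and collect every difference before raising. The sketch below is my own illustration - the _MISSING sentinel and the message format are choices I have made, and it still relies on attribute __eq__ and still won't handle __slots__:

```python
from copy import copy

class Invariant:
    """Context manager asserting that an object's attributes don't change.

    Sketch only : gathers *all* differences (changed, added and deleted
    attributes) before raising, rather than stopping at the first one.
    """
    _MISSING = object()        # sentinel for an attribute absent on one side

    def __init__(self, obj):
        self._obj = obj
        self._old_obj = copy(obj)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type:
            return False       # don't suppress exceptions
        # Walk the union of attribute names, so deletions are seen too
        names = set(self._obj.__dict__) | set(self._old_obj.__dict__)
        diffs = []
        for att in sorted(names):
            old = self._old_obj.__dict__.get(att, self._MISSING)
            new = self._obj.__dict__.get(att, self._MISSING)
            if old is new or old == new:
                continue
            old_s = '<missing>' if old is self._MISSING else repr(old)
            new_s = '<missing>' if new is self._MISSING else repr(new)
            diffs.append('{} : was {}, now {}'.format(att, old_s, new_s))
        if diffs:
            raise ValueError('Object changed :\n' + '\n'.join(diffs))

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)
try:
    with Invariant(p):
        p.x = 99           # changed
        del p.y            # deleted
        p.z = 'new'        # added
except ValueError as err:
    print(err)             # reports all three differences at once
```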
I hope you find this useful.

Saturday, 7 May 2016

Python Weekly #12 - Decorators 101


Decorators 101 - how they work, and how to write them.

For someone new to Python, Decorators are one of the more baffling of the language features. In this post, I will explore why one might need decorators, and how to write them.
Simply put, a decorator is a way to extend the functionality of a function or method without changing the internal implementation of the function and, crucially, without changing its calling signature.

Examples of when to use decorators include :
  • To add logging to multiple functions
  • To add common code safety checks to multiple functions
  • To transform function arguments or return values

Calling Signature

The calling signature is the list of arguments that you pass to the function, and the values that the function returns. So long as the calling signature is unchanged, other code can continue to call the function as normal, as if the decorator hadn't been applied.

The Basic Principles

In python everything is an 'object' - this includes numbers, strings, and crucially here, functions. Since functions are objects in Python they can be passed as arguments to other functions, and they can be returned by other functions. Objects are `bound` to names - so this code
>>> spam = 'Spam and Eggs.'
Creates a string object - and binds it to the name spam. Names given to functions are no different. A function is an object, which is bound to a name. This means that we can do some interesting things with functions :
  • We can rename them, by assigning a different name
  • We can pass them as arguments to other functions
  • We can store them in lists, dictionaries or anywhere else
  • We can call the function from anywhere we have the function object, so long as we honour the calling signature. It is irrelevant what name we currently use, or if the function has a name at all.  
  • We can return functions from functions
It is these properties that allow us to write decorators.

Functions as arguments

>>> def greeting():
...    print 'Hello from John, Paul, George & Ringo'

>>> def logme( function ):
...    print 'INFO: Calling {} function'.format( function.__name__)
...    return function()

>>> logme( greeting )
INFO: Calling greeting function
Hello from John, Paul, George & Ringo
We are not quite at a decorator yet - but we have been able to pass a function (the greeting function) as an argument into the logme function, and then call it from within logme. You will notice that the logme function does not use greeting by name - it uses its own function argument.

Although this works, it isn't very useful: we have to remember to call the logme function and pass the greeting function as an argument each time we want the call to be logged, and if the greeting function had arguments of its own we would need a different version of logme - it would get messy very quickly. Finally, it is no longer obvious from the code logme( greeting ) that we are even calling the greeting function at all - all in all it makes the code more difficult to write and to read.
Before we get to a full decorator, we have one more python feature to explore - nested functions :

Nested Functions

>>> def outer( arg):
...    def inner( arg2):
...        print  'inner called with {}'.format(arg2)
...        return arg*arg2 + 1
...    print  'outer called with {}'.format(arg)
...    return inner( arg + 1)
>>> outer( 5 )
outer called with 5
inner called with 6
31
>>> inner( 6 )
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'inner' is not defined
There are a few things to note here :
  • The inner function has access to the arguments passed to the outer function. If you try it you will find that inner can change the value of arg, but that change is not visible in the outer function, even after it calls inner.
  • The inner function can only be accessed inside the outer function. As the error message shows inner can't be called from elsewhere.
Instead of having outer return the result of calling inner, what if outer just returned the inner function itself without calling it:

>>> def outer( arg):
...    def inner( arg2):
...        print  'inner called with {}'.format(arg2)
...        return arg * arg2 + 1
...    print  'outer called with {}'.format(arg)
...    return inner
>>> five = outer( 5 )
outer called with 5
>>> six = outer( 6 )
outer called with 6
Now the outer function doesn't return a value - it returns the inner function, and the inner function is never called (well not yet). Strictly speaking the outer function returns a function object. In our code above we call outer twice, and bind the results to the names five and six respectively. We know that the original inner object was expecting a single argument (arg2), and we know that five and six are bound to a function object so - we should be able to call them :
>>> five( 2 )
inner called with 2
11
>>> six( 2 )
inner called with 2
13
We can see that calling five and six generates different results - and working through the code, you can see that the five function behaves as we would expect if arg is 5, and the six function behaves as we would expect if arg is 6. You will find this to always be true - when you define nested functions and the inner function uses arguments or variables from the outer function, the inner function retains access to those outer values (this is called a closure). So let's glue this all together - and build a decorator.

Our first decorator

>>> def logme( func ):
...     def wrapper( *args, **kwargs ):
...         print 'INFO : calling {}( {}, {} )'.format( func.__name__, args, kwargs)
...         return func( *args, **kwargs )
...     return wrapper
>>> def greeting( name ):
...     print 'Hello {}'.format( name )
>>> def goodbye( name ):
...     print 'Goodbye {}'.format( name )
>>> greeting( 'Tony' )
Hello Tony
>>> goodbye( 'Tony' )
Goodbye Tony
>>> greeting = logme( greeting)
>>> goodbye = logme( goodbye)
>>> greeting( 'Tony' )
INFO : calling greeting( ('Tony',), {} )
Hello Tony
>>> goodbye( 'Tony' )
INFO : calling goodbye( ('Tony',), {} )
Goodbye Tony
If you aren't sure, work through the logme function: notice that it returns the wrapper function, which in turn uses the func argument that was passed to logme. If you haven't seen the *args and **kwargs notation before - this is argument packing and unpacking. You can see at the bottom of the code snippet that we redefine the names greeting and goodbye (remember function names are simply names, and they can be used and reused as we wish). We now have new functions with added functionality, and we can add this functionality very easily to any function we wish. This ability is so useful that there is a very simple syntax which removes the need for us to redefine the function names :
>>> def logme( func ):
...     def wrapper( *args, **kwargs ):
...         print 'INFO : calling {}( {}, {} )'.format( func.__name__, args, kwargs)
...         return func( *args, **kwargs )
...     return wrapper
>>> @logme
... def greeting( name ):
...     print 'Hello {}'.format( name )
>>> @logme
... def goodbye( name ):
...     print 'Goodbye {}'.format( name )
>>> greeting( 'Tony' )
INFO : calling greeting( ('Tony',), {} )
Hello Tony
>>> goodbye( 'Tony' )
INFO : calling goodbye( ('Tony',), {} )
Goodbye Tony
The @logme line before each function is a special syntax which causes the function to be wrapped in the decorator (in this case the 'logme' decorator), and saves you having to redefine the function name each time. Just remember that
>>> @logme
... def greeting( name ):
...     pass
... # The code above does EXACTLY the same as the code below
>>> def greeting( name ):
...     pass
>>> greeting = logme(greeting)
You should now have a better understanding of how to write simple decorators, and how they work. The key rules for these simple decorators are :
  1. The Outer function takes one argument - that is the function which will be decorated
  2. The name of the outer function is the name of the decorator
  3. The Outer function is called whenever the decorator is applied - i.e. whenever @logme appears in the above example.
  4. The Outer function should only return the inner function. 
  5. The inner function takes the same arguments as the function being decorated (or more usefully *args & **kwargs)
  6. In general it is the inner function which implements the new functionality (either before or after calling the function)
  7. The Inner function can return anything at all, but ideally it should return the same type as the function being decorated (or raise an exception) - remember the calling signature. 
  8. The inner function is called every time the decorated function is called.
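The rules above can be pulled together into one self-contained sketch (Python 3 this time, hence the print() calls). I have added one refinement worth knowing about: the standard library's functools.wraps, which copies the wrapped function's __name__ and docstring onto the wrapper, so the decorated function still reports its original name:

```python
import functools

def logme(func):
    """Decorator : log every call to the wrapped function."""
    @functools.wraps(func)            # preserve func's __name__ and __doc__
    def wrapper(*args, **kwargs):
        print('INFO : calling {}( {}, {} )'.format(func.__name__, args, kwargs))
        return func(*args, **kwargs)
    return wrapper

@logme
def greeting(name):
    """Return a greeting for name."""
    return 'Hello {}'.format(name)

print(greeting('Tony'))       # logs the call, then prints 'Hello Tony'
print(greeting.__name__)      # 'greeting' - not 'wrapper', thanks to wraps
```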
There will be other posts in this Decorator series - watch this space.

Saturday, 26 December 2015

Weekly Python #11 - Duck Typing

Duck Typing

If it walks like a duck, and quacks like a duck ....

If you are new to Python you may well have heard of Duck Typing, but you might not be too sure what it means. In this article we will explore Duck Typing through one of the most powerful concepts in Python: Iterators.

An Iterator is any object which implements functionality to allow the user to move through a collection of data one item at a time. Iterator behaviours are both simple to use and implement. Many parts of the python standard library implement iterators : strings, lists, dictionaries, tuples, generators, and even files.

Because every iterator shares the same behaviour, your application can write one function which will work with every single iterator - no matter what type it is. Your application can also implement its own iterators, and the same functions which work on standard library iterators will work the same way on yours too.

That is the theory, lets see something in practice :

>>> def dup( iterator ):
...     "A Generator which will duplicate each item in the given iterator"
...     for item in iterator:
...         yield item
...         yield item
The function above will take any iterator and return a generator which yields each item of the original, duplicated. This function relies on Duck Typing: it needs whatever is passed in as the iterator argument to support iteration. It will work on any standard iterable or iterator (strictly speaking, lists, strings and tuples are Iterables - objects you can obtain an iterator from - while generator expressions and files are themselves Iterators; a for loop accepts either).
>>> # A list of numbers
>>> [i for i in dup([0,1,2,3,4,5,6,7,8,9])]
[0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9]
>>> # A String
>>> [i for i in dup("abcdef")]
['a', 'a', 'b', 'b', 'c', 'c', 'd', 'd', 'e', 'e', 'f', 'f']
>>> # A Tuple
>>> [i for i in dup( (0,1,2,3,4,5,6) )]
[0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6]
>>> # A Generator expression
>>> [i for i in dup( i for i in range(0,7) )]
[0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6]
This function would also work if you passed in an open file object, and would generate a duplicate for every line in the file, and with a dictionary it will generate a duplicated sequence of keys.
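To see the "your application can implement its own iterators" half of the claim in action, here is a minimal sketch (the Countdown class is my own illustration, using the Python 3 spelling of the protocol - __iter__ and __next__) which works with dup unchanged:

```python
class Countdown:
    """Iterator counting down from start to 1 - usable anywhere an iterator is."""
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self          # an iterator returns itself

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1

def dup(iterator):
    "A Generator which will duplicate each item in the given iterator"
    for item in iterator:
        yield item
        yield item

print(list(dup(Countdown(3))))   # [3, 3, 2, 2, 1, 1]
```

Nothing in dup knows (or cares) that Countdown is a home-made class rather than a list or a file - that is Duck Typing.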

With a language such as C or C++ you would struggle to implement such a generic function operating on strings, vectors, arrays and files, since C is a statically typed language: you need to declare the types of your function's arguments ahead of time, and strings, vectors and arrays are all different types.

With Duck typing, as we have seen it isn't the type that matters, it is the behaviour (in this case data which behaves like an Iterable). In general you don't even worry about testing that the arguments are the correct/expected type (let the lower levels worry about raising exceptions and cascading them upwards); if your code is using hasattr or isinstance to check the type or behaviour exists before you use it, then you aren't utilizing the full power of Duck-Typing.

Custom Errors

You might consider that if you need to raise a custom error rather than use the exception generated by the  lower level code, then the right thing to do is to check that the attributes have the right behaviour before you use them :

>>> def dup( iterator ):
...     "A Generator which will duplicate each item in the given iterator"
...     if not hasattr(iterator, "__iter__"):
...         raise TypeError("You can only duplicate items from an iterator")
...     for item in iterator:
...         yield item
...         yield item

Although this will work (note that because dup is a generator, the hasattr check doesn't actually run until the first item is requested), it is not the best use of the language, and not very Pythonic (i.e. not best practice for Python code). A better solution is to use the power of Duck Typing together with exception handling.
>>> def dup( iterator ):
...     "A Generator which will duplicate each item in the given iterator"
...     try:
...         for item in iterator:
...             yield item
...             yield item
...     except TypeError:
...         raise TypeError("You can only duplicate items from an iterator")
Not only is this version best practice (if you really insist on a custom exception message), but it will also be quicker, as it won't need to execute the hasattr test every time the function is called.

Using isinstance or hasattr

There is one case where using isinstance or hasattr is a completely valid thing to do (and even recommended): when your code takes data as an argument into a class's __init__ which needs to be of an expected type, but doesn't actually use that data until much later - it is always better to report errors as soon as possible.

Tuesday, 22 December 2015

Weekly Python #10 - Employing disabled workers

Employing disabled workers

This article is a short post which has nothing to do with Python, but is about my other area of interest : Disability and Employment rights.

In most modern jurisdictions (certainly in the EU), it is an obligation of employers to treat disabled workers fairly and not show any bias due to their disability. The definition of disability is pretty wide; for instance in the UK it generally covers anyone with a recognized chronic (i.e. long term) condition.

For an employer with a worker who becomes disabled, it is the responsibility of the employer to make reasonable adjustments to the job to ensure that the worker can still do their role; but what does the obligation of fairness mean when you are looking to take on a brand new employee?

During both the CV and interview stages a prospective employer should concentrate on the skills of the applicants - and what they can bring to the organization - rather than on what they might not be able to do because of their disability. All relevant skills and experience should be seen as positives for a candidate, just as training needs are clearly costs of employing that candidate. My understanding of the law [1] suggests that any extra costs associated with employing a disabled candidate should not be counted against that person unless they would be unreasonable for the company to bear.

I have an example recently which is worth recounting :

My current employer (I am not giving any names) has a strategy to try to bring everyone together into a number of key locations (in order to improve collaboration, and reduce infrastructure and building costs). The strategy has clearly stated exceptions for employees who are disabled or otherwise unable to be based in one of these locations. I recently had a telephone interview for a design role: being a member of a team completing software and network designs.

During the interview though, I wasn't really asked about my skills or experience, or what I could bring to the role. The interviewer focused almost entirely on the company's buildings strategy, how me working from home was incompatible with that strategy, and said that it was "a waste of time" to talk to me any further. The role had no operational need for everyone to be based in the same building; no secure networks, no key customers to work with daily. There was no rationale given for why being in the office was essential to the role, although I can understand that one person working from home will affect how the team works together (most of which could be overcome with an appropriate use of technology). The only "explanation" given was the building strategy, despite it not being mandatory for all employees.

I am not suggesting that every role can be executed efficiently by everyone (a person in a wheelchair would probably find it difficult to be a scaffold rigger), but there are many cases where a disabled person would be able to do a given role just as efficiently as a non-disabled person, and therefore the critical decision should be whether that person has the right mix of technical, business and personal skills to do the role.

I accept that there is sometimes a fine line between two candidates and I am not suggesting that an employer should always choose the disabled candidate, but the employer should be looking at a disability as something to adjust to, rather than something which prevents the job from being done.

[1] I am not an employment lawyer, or a trained recruiter. I am a s/w designer and developer with 27 years of systems and network experience, and 2 years experience of living with a disability.

Sunday, 13 December 2015

Weekly Python #9 - Adding to a list - only one way ?

Adding to a list

Is there really only one way ?

It is one of the principles of Python that "There should be one-- and preferably only one --obvious way to do it." You will notice, though, when it comes to many parts of the standard language, that there are actually several different ways to do some things - for instance adding a single item to a list.
>>> a = [0,1,2,3,4,5,6,7,8,9,10]
>>> a.append(11)
>>> a += [12]
>>> a = a + [13]
>>> a.extend([14])
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
All of these appear to do the same things, but behind the scenes they all do different things - and have different advantages.


Using append

Using the append method is by far the most obvious method: it is very readable (and readability is key when writing software), it is implemented as a single method call on the list object, and once the append is complete there is no return value. It all makes for lightning quick execution :
$ python -m timeit -n 1000000 -s 'a = [i for i in range(1000)]' 'a.append(1001)'
1000000 loops, best of 3: 0.0452 usec per loop
0.0452 microseconds (a microsecond is one millionth of a second) per append, averaged over 1,000,000 appends to the list.

Using +=

For some people this idiom is just as readable as an append but it is actually a bit less efficient, as there a few things that have to be done before the item can be appended to the list.
  • the element (12 in our case) has to be made into a list
  • The __iadd__ method is called
  • The __iadd__ method does a type check to ensure that the argument is a list
$ python -m timeit -n 1000000 -s 'a = [i for i in range(1000)]' 'a += [1001]'
1000000 loops, best of 3: 0.081 usec per loop
Using the += method takes nearly 2 times longer than using append for adding single elements.

It would probably be far more efficient to use this method to add another existing list (rather than looping around the list and appending each element) - we will look at this later

Using + 

At first glance, doing a = a + [13] looks like it will perform similarly to the previous statement a += [12], but there is a lot more going on under the skin here :
  • the element (13 in our case) has to be made into a list
  • The __add__ method is called
  • The __add__ method does a type check to ensure that the argument is a list
  • The __add__ method creates a brand new empty list to hold the result of the addition, and will need to copy the original a list and argument list into this new empty list. The name a will be bound to the new list returned by the __add__ method.
These differences go some way to explaining the performance difference :
$ python -m timeit -s 'a = [i for i in range(1000)]' 'a = a + [1001]'
10000 loops, best of 3: 25.4 usec per loop
So using a straight list addition as above - it takes over 500 times longer to add a single element to a list; and it can't be recommended for use. It will also use more memory (as it creates this brand new list and fills it, before the old one is unbound), and this could also be significant for your application.


This significant performance impact seems to not be connected to the size of the original list - even adding a single element to an empty list takes a similar amount of time, and creation of a new empty list doesn't seem to be the issue either. At the time of writing the performance impact seems to be caused by the re-binding of the new list to one of the names in the expression.

The only advantage is that if you have another name bound to the original list, then that original list wont be changed - but there are better ways to ensure that this happens - by taking a copy of the original list before you change it :
>>> a = [0,1,2,3,4,5,6,7,8,9,10]
>>> b = a[:]            # Take an explicit shallow copy of a
>>> a.append(11)        # Use append - which we know is a lot quicker
>>> a, b
([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
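The aliasing behaviour is worth seeing directly: a += [x] mutates the existing list object in place (so every name bound to it sees the change), whereas a = a + [x] rebinds a to a brand new list and leaves other names pointing at the original. A quick sketch:

```python
# In-place: += mutates the existing list object
a = [1, 2, 3]
alias = a                 # a second name bound to the same object
a += [4]
print(alias)              # [1, 2, 3, 4] - the alias sees the change
print(a is alias)         # True - still the same object

# Rebinding: + creates a new list and binds a to it
b = [1, 2, 3]
alias_b = b
b = b + [4]
print(alias_b)            # [1, 2, 3] - the original is untouched
print(b is alias_b)       # False - b now names a different object
```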


Using extend

The extend method is intended to be used to combine two lists; a.extend(x) is equivalent to a += x (where a and x are both lists). We can also use extend to add single elements to a list :
$ python -m timeit -s 'a = [i for i in range(1000)]' 'a.extend([1001])'
10000000 loops, best of 3: 0.0924 usec per loop
So extend is approximately equivalent to using a += x - as expected.

Adding multiple elements efficiently

So far we have looked at adding single elements to a list, but often our code needs to combine two lists; so it is useful to look at the relative performance of a += x vs a.extend(x) where x is a list.
$ python -m timeit -s 'a = [i for i in range(1000)]' -s 'b=[i for i in range(1001,2001)]' 'a += b'
100000 loops, best of 3: 2.25 usec per loop
$ python -m timeit -s 'a = [i for i in range(1000)]' -s 'b=[i for i in range(1001,2001)]' 'a.extend(b)'
100000 loops, best of 3: 2.3 usec per loop
So within a margin of error the two are roughly equivalent. Just out of interest how much slower is appending each element :
$ python -m timeit -s 'a = [i for i in range(1000)]' -s 'b=[i for i in range(1001,2001)]' 'for e in b:' '    a.append(e)'
10000 loops, best of 3: 47.2 usec per loop
Appending each element of a 1000-element list one at a time is about 21 times slower than adding the two lists together, and it is not unreasonable to expect that the gap between the two approaches will widen as the lists get longer.


Although on the face of things there are 4 different methods for adding an element to a list, and at least 3 for combining lists together, there is really only one way you should use in each case: append for single items and extend for combining lists. The guiding "one obvious way" principle espoused by Python experts still holds, although sometimes it isn't obvious which way is the obvious one.
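The shell timings above are easy to reproduce from a script with the timeit module; a sketch (Python 3 shown, so the numbers will differ from the Python 2 shell runs quoted above, but the ranking should hold):

```python
import timeit

setup = 'a = list(range(1000))'
statements = [('append', 'a.append(1001)'),
              ('+=',     'a += [1001]'),
              ('a = a +', 'a = a + [1001]'),
              ('extend', 'a.extend([1001])')]

for label, stmt in statements:
    # number=10000 keeps the slow "a = a + [...]" case manageable,
    # since the list keeps growing between iterations
    elapsed = timeit.timeit(stmt, setup=setup, number=10000)
    print('{:8s}: {:8.4f} usec per loop'.format(label, elapsed / 10000 * 1e6))
```

All four statements produce the same list; only the time (and memory) they take differs.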

Friday, 4 December 2015

Python Weekly #8 - I know Python - now what ?

I know Python - so now what ?

In this article I wanted to write a few words about a question that I have seen regularly on a number of different support forums. The question is generally of the form given in the title: new programmers have absorbed the syntax and control structures of Python, and now want to know what to do with them.

Have you really learned Python ?

The first thing that springs to mind is whether the person asking the question has really learned the language at all. The syntax of Python is relatively simple, and the basic data types (str, list, int, bool etc) are relatively easy to learn and intuitive, but Python is so much more than that - for instance - do you know what this does before trying it :
>>> import antigravity 
This is just one (albeit a bit of fun) of the 100+ importable modules that come delivered with Python (the Standard Library) and that allow you to do wonderful things. Beyond that there is the world of PyPI (the Python Package Index) - over 70,000 modules and packages developed across the world and freely available for you to use in a few easy steps.

These packages provide functionality that extends far beyond the basic syntax, and it is all reusable for free.

Now I am not suggesting you should know every one of the 70,000+ modules on PyPI, or all the details of the 100+ modules that form the Standard Library, but you should be aware that they exist, how to find them, how to read the documentation, and at least have a working knowledge of some of the main ones (see my previous article Batteries included - 5 Standard Library Packages everyone should know). In the case of the PyPI contributions you should also know how to install them into your local environment (hint: you will need to use pip).

Practice, Practice, Practice

The best way to learn anything and get proficient at it is to continue to practice. There are a number of ways to hone your skills in Python and these include :
  • Project Euler : "a series of challenging mathematical/computer programming problems that will require more than just mathematical insights to solve. Although mathematics will help you arrive at elegant and efficient methods, the use of a computer and programming skills will be required to solve most problems". A heavy mathematical bent, but well worth trying some to hone your skills, especially when it comes to writing efficient and performant code. There are 526 Project Euler problems created at the time of writing.
  • The Python Challenge : Billed as "The first programming riddle on the internet", this is a chain of problems which are solved by deciphering a set of clues, and then writing a short scripts to calculate/derive the answer - which is then the form of the URL for the next riddle". For the most part they rely on the standard library. There are 33 steps to the riddle currently.
  • Python programming challenges : A small set of 7 programming challenges set to support the UK GCSE Computer Programming qualification (GCSE is aimed at students aged around 16).
  • CodingBat Python problems : "CodingBat is a free site of live coding problems to build coding skill in Python", created by a computer science lecturer at Stanford. The coding problems give immediate feedback, so it's an opportunity to practice and solidify understanding of the concepts.
These (and similar sites) allow you to practice without the difficulty of trying to come up with your own ideas - and if you had your own "Big Idea" then you wouldn't be reading this article.

What are you interested in ?

If you really want to get moving on your own big idea, then my advice is to look around you - is there something in your life that would benefit from a computer program to help you do it? This could be something from your work environment or your home life. (Caution: if you are going to tackle a work-based problem, check that you won't be in breach of any work security rules before you start. I would hate for you to lose your job because you followed my advice, identified a work problem, and installed Python on your work PC/laptop without permission.)

The reason behind this advice is simple: If you need something, (or even think you need something) it is far more likely that you will work on it until it is complete (or at least good enough). If you start on something that you don't need, or you are not enthused about, you will be far less likely to complete it.

I will give you an example based on my first application, developed in 2008. I am an amateur photographer, and I use flickr to store the majority of my pictures. At the time the flickr uploader on the website was clunky, slow and unreliable, and required a lot of post-upload work to add tags and titles to the pictures, as well as needing to add pictures to folders etc. It was not surprising, then, that my first project was to write my own PC-based uploader, which allowed me to take groups of pictures, add them into folders, add tags, titles and descriptions, change permissions etc., and then upload them to flickr with a single click. This application required me to work with a number of different libraries and packages to complete the functionality, including :
  • pyGTK - Python bindings for GTK+ - a full featured GUI framework
  • PIL - the Python Imaging Library - an image manipulation toolset
  • flickrapi - a Python interface to the HTTP-based Flickr API.
I completed this, and used it "in anger" for several years (fixing bugs and adding more features as I went along), until around 2012 when the website uploader was redeveloped; it is now far better than it was. My pyFlickr application, although complete, is now never used.

I have to admit that my development directory is littered with programming projects which I have started but never completed - mainly because at the end of the day I didn't need it.

Get Involved

If you haven't got a project that you need to write, but you still want to keep your Python skills fresh, then look around for a project that others have started but that you can work on (by its very nature, much of the work published on PyPI and other places is Open Source, and many projects are looking for collaborators, either to fix bugs or to assist in developing new features). Find a project that interests you and contact the authors.

Many of the bigger projects will require some proof of experience and knowledge of Python - a project you have completed yourself, or contributions to other projects. Nearly all projects will require a good working knowledge of one of the Python testing frameworks, and knowledge of how to use the project's chosen method of change control. I would also suggest that you should be proficient in profiling your code in terms of performance and memory usage, and in measuring metrics like code coverage.

Sunday, 29 November 2015

Python Weekly #7 - An easy to extend plugin framework

An easy to extend plugin framework

In a number of my projects I have been through several iterations of developing a plugin framework - that is, a way that functionality can be extended easily (by me, or anyone else) without explicitly editing the main code.
An example might be a game where the user has to manage a number of different types of resources produced by different types of buildings, with specialist people/aliens/trolls etc. staffing those buildings and using those resources. With an efficient plugin system, it is easy to imagine the base game defining a starting set of buildings, resources and types of workers, and then being able to extend the game with new buildings, new resources etc. simply by adding a new plugin, without anyone editing the main game.
A plugin system has to have the following characteristics : 
  1. Automatic Discovery : Newly added plugins should be automatically discoverable - i.e. the application should be able to find new plugins without any user intervention
  2. Clear Functionality : It should be obvious to the application what "type" of functionality the plugin adds (using the game example above does the plugin add new resources, new buildings or new workers - or all 3 ?).
  3. Simple to Use : Is there a simple way for the user to use the plugin once it is added; for instance are plugins added to existing menus, or do they create obvious new menus etc. ?
This article is going to describe a simple framework that can be used to extend your python applications. The framework fully addresses the first two points of the list above, and I will give you some pointers on the 3rd item, as there is no generic solution to it - it really does depend on your application.

Source Code

Source code to accompany this article is published on GitHub : plugin framework. This includes a package which encapsulates the code within a simple to use class, a very simple example application (showing how to use the framework), and a skeleton plugin, showing the minimal requirements that any plugin must meet to work with this framework.

1 - Automatic Discovery

This is actually three problems: can we find the code that forms the plugin; can we load this code (a python module or package); and can we identify which parts of this loaded code are actually the plugins, and which are supporting code only used by the plugin.

Finding the python code

Perhaps unsurprisingly, this is the simplest problem to address - the application can keep all of its plugins in a standard directory (or maybe two - a system-wide directory and a user-specific directory) :
import sys
import os

def get_plugin_dirs(app_name):
    """Return a list of plugin directories for this application or user

    Add all possible plugin paths to sys.path - these could be
    <app_path>/plugins and ~/.<app_name>/plugins
    Only paths which exist are added to sys.path : it is entirely possible
    for nothing to be added.
    Return the paths which were added to sys.path.
    """
    # Construct the directories into a list
    plugindirs = [os.path.join(os.path.dirname(os.path.abspath(sys.argv[0])), "plugins"),
                  os.path.expanduser("~/.{}/plugins".format(app_name))]

    # Remove any non-existent directories
    plugindirs = [path for path in plugindirs if os.path.isdir(path)]
    sys.path = plugindirs + sys.path
    return plugindirs


The get_plugin_dirs function presented above relies heavily on the os.path module, since this is the most portable way to ensure that the application correctly constructs valid file paths.

We now have a list of zero or more directories which may contain plugin code - so let's identify the code in those directories.
In Python, code can exist as any of :
  • An uncompiled python file, with the `.py` extension.
  • A compiled python file, with the `.pyc` extension
  • A C extension, with `.so` extension (or similar)
Thankfully, python makes it very easy to identify all of these files : use imp.get_suffixes() (the imp module is deprecated in Python 3, where the equivalent suffix lists are available from importlib.machinery). Because of the features we want to use later, we actually only want the python files (compiled and uncompiled) - and not any of the C extensions.
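As a quick sketch of the Python 3 equivalent, the suffix lists live as separate constants in importlib.machinery, so no filtering is needed (the variable name suff_list simply mirrors the code below):

```python
import importlib.machinery as machinery

# In Python 3 the source, byte-compiled and C extension suffixes are
# separate constants - so we just skip machinery.EXTENSION_SUFFIXES.
suff_list = machinery.SOURCE_SUFFIXES + machinery.BYTECODE_SUFFIXES
print(suff_list)  # typically ['.py', '.pyc']
```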

Plugins written in C ?

If you are adept enough to write an extension in C which you want to use as a plugin, then you can also easily write one or more Python wrappers around your C extension code so that it complies with our framework - more on that later.
import imp
import os

def identify_modules(dir_list):
    """Generate a set of valid modules or packages to be imported

    param: dir_list : A list of directories to search in
    return: A set of module/package names which might be importable
    """
    # imp.get_suffixes returns a list of tuples : (<suffix>, <mode>, <type>)
    suff_list = [s[0] for s in imp.get_suffixes() if s[2] in [imp.PY_SOURCE, imp.PY_COMPILED]]
    # By using a set we easily remove duplicated names - e.g. file.py and file.pyc
    candidates = set()

    # Look through all the directories in the dir_list
    for dir in dir_list:
        # Get the content of each dir - we don't need os.walk
        dir_content = os.listdir(dir)

        # Look through each name in the directory
        for file in dir_content:

            # Does the file have a valid suffix for a python file ?
            if os.path.isfile(os.path.join(dir, file)) and os.path.splitext(file)[1] in suff_list:
                candidates.add(os.path.splitext(file)[0])
            # Is the file a package (i.e. a directory containing an __init__.py or __init__.pyc file) ?
            if os.path.isdir(os.path.join(dir, file)) and \
                    any(os.path.exists(os.path.join(dir, file, "__init__" + s)) for s in suff_list):
                candidates.add(file)
    return candidates
In the final discovery step we need to see if any of the identified files actually implement a plugin, and for this we can use a hidden gem of the Python Standard Library - the inspect module. The inspect module provides functionality to look inside python modules and classes, including ways to list the classes within modules, and the methods in classes (and a lot more besides). We are also going to make use of a key feature of Object Oriented programming - inheritance. We can define a basic PluginBase class, and use the inspect module to look at each of our candidate modules to find any class which inherits from PluginBase. In order to comply with our framework, the classes which implement our plugins must inherit from PluginBase.
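Before wiring inspect into the framework, here is a tiny self-contained illustration of inspect.getmembers, using the standard library's json module in place of a plugin module:

```python
import inspect
import json  # any imported module will do for this illustration

# Collect every class defined at the top level of the json module
classes = dict(inspect.getmembers(json, inspect.isclass))
print(sorted(classes))  # includes 'JSONDecoder' and 'JSONEncoder'
```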

Location of PluginBase

Currently we are presenting our framework as a set of functions - without identifying a module etc. If our plugin classes are going to inherit from PluginBase, then our framework, and especially PluginBase, will need to exist in a place where it can be easily imported by our plugin modules. This is achieved by having the PluginBase class defined in a top level module, or in a module in a top level package. (A top level module/package is one that exists directly under one of the entries in sys.path.)
import inspect
import importlib

class PluginBase(object):
    @classmethod
    def register(cls_):
        """Must be implemented by the actual plugin class

        Must return a basic informational string about the plugin
        """
        raise NotImplementedError("Register method not implemented in {}".format(cls_))

def find_plugin_classes(module_list):
    """Return a dictionary of classes which inherit from PluginBase

    param: module_list: a list of valid module names - from identify_modules
    return: A dictionary of the classes which inherit from PluginBase and
            implement the register method. The class is the key in the
            dictionary, and the value is the string returned by the
            register method.
    """
    cls_dict = {}
    for mod_name in module_list:
        m = importlib.import_module(mod_name)
        for name, cls_ in inspect.getmembers(m, inspect.isclass):
            if issubclass(cls_, PluginBase) and cls_ is not PluginBase:
                try:
                    cls_dict[cls_] = cls_.register()
                except NotImplementedError:
                    # The class hasn't implemented register - ignore it
                    pass
    return cls_dict
And there we have it - the basis of plugin characteristic #1 - that the plugin is automatically discoverable. Using the code above, all that a plugin implementation needs to do is live in a module which exists in one of the two plugin directories, be a class which inherits from the PluginBase class, and implement a sensible register method.
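To make the requirements concrete, here is a sketch of about the smallest plugin that would satisfy them. The class names are invented, and PluginBase is repeated inline so the sketch is self-contained; a real plugin would import it from the framework's top level module instead:

```python
# PluginBase repeated inline for a self-contained sketch - in a real plugin
# this would be: from plugin_framework import PluginBase (or similar)
class PluginBase(object):
    @classmethod
    def register(cls_):
        raise NotImplementedError(
            "Register method not implemented in {}".format(cls_))

class HelloPlugin(PluginBase):
    """The smallest possible conforming plugin (an invented example)"""
    @classmethod
    def register(cls_):
        # Return a basic informational string about the plugin
        return "HelloPlugin v1.0 - says hello"

print(HelloPlugin.register())  # HelloPlugin v1.0 - says hello
```

Dropping a module containing this class into one of the plugin directories is all that is needed for the discovery code above to find it.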

2 - Clear Functionality

The clear functionality characteristic is incredibly easy to implement, again using inheritance. Your application will have a number of base classes which define the basic functionality that each element of your game implements, so just ensure that your plugin classes also inherit from one of these base classes.

import collections

def categorise_plugins(cls_dict, base_classes):
    """Split the cls_dict into one or more dictionaries, depending on which
    base class each plugin class inherits from"""
    # Note: a defaultdict factory takes no arguments, so use dict itself
    categorised = collections.defaultdict(dict)
    for base in base_classes:
        for cls_ in cls_dict:
            if issubclass(cls_, base):
                categorised[base][cls_] = cls_dict[cls_]
    return categorised
We can put all of this together into a useful helper class for loading and unloading plugin functionality - see plugin_framework on GitHub for the full implementation, and a simple demonstration application.
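As a self-contained sketch of the categorisation in use (the Building and Resource base classes and the plugin classes are invented for the game example; categorise_plugins is repeated here with the dict default factory):

```python
import collections

def categorise_plugins(cls_dict, base_classes):
    """Split cls_dict by which base class each plugin class inherits from"""
    categorised = collections.defaultdict(dict)
    for base in base_classes:
        for cls_ in cls_dict:
            if issubclass(cls_, base):
                categorised[base][cls_] = cls_dict[cls_]
    return categorised

# Invented application base classes, using the game example
class Building(object): pass
class Resource(object): pass

# Invented plugin classes, as would be returned by find_plugin_classes
class Farm(Building): pass
class Grain(Resource): pass

cls_dict = {Farm: "Farm plugin", Grain: "Grain plugin"}
result = categorise_plugins(cls_dict, [Building, Resource])
print(result[Building])  # the Building category: just the Farm class
```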

3 - Simple to Use

How your application makes the plugin simple to use and accessible really does depend on your application, but as promised here are some pointers :
  • The register() method on the plugin class could be used to return information on which menus etc. this plugin should appear in - or even whether new menus, toolboxes etc. should be created to allow access to this plugin.
  • In most cases the plugin class should also allow itself to be instantiated, and each instance may well be in a different state at any given time. The class therefore will need to implement methods to allow the application to use those instances, to change their state etc.
It is up to the application to define the interface expected of each different base class (i.e. the attributes, classmethods, staticmethods and instance methods). This definition should be clearly documented.
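As an illustration only, here is what such a documented interface might look like for the game example's buildings; all of the names and methods here are hypothetical:

```python
class BuildingBase(object):
    """Hypothetical base class for building plugins.

    Documents the interface the application expects every building
    plugin to implement.
    """
    @classmethod
    def register(cls_):
        """Return a basic informational string about the plugin"""
        raise NotImplementedError

    def __init__(self, name):
        self.name = name

    def production(self):
        """Return a dict mapping resource name to amount produced per turn"""
        raise NotImplementedError

class Farm(BuildingBase):
    """An example plugin complying with the documented interface"""
    @classmethod
    def register(cls_):
        return "Farm v1.0 - produces grain"

    def production(self):
        return {"grain": 3}

farm = Farm("North Farm")
print(farm.production())  # {'grain': 3}
```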