Python Contexts in Maya

Python's contexts are a powerful feature, and if you're writing any kind of pipeline or animation tools, you should know about them.

Maya has a number of useful applications for the context. If you've spent any time working in it, you have no doubt noticed the tendency of the global selection list to change at inopportune times. Many Maya commands have flags to prevent the selection of new objects, but Python gives us a much nicer way. Consider the following:


from contextlib import contextmanager
import maya.OpenMaya as om
@contextmanager
def keepSelection():
    # setup
    sel = om.MSelectionList()
    om.MGlobal.getActiveSelectionList(sel)

    yield

    #cleanup
    om.MGlobal.setActiveSelectionList(sel)

This context manager maintains Maya's selection while running any code in the block.


import maya.cmds as cmds
cube = cmds.polyCube()
cmds.select(cube)
with keepSelection():
    loc = cmds.spaceLocator()
    cmds.select(loc)

print(cmds.ls(sl=1))

If all has gone well, you should see the original selection printed in the Script Editor: [u'pCube1', u'polyCube1']. The original selection is restored when the code within the 'with' block completes.

Background Info

If you've ever used the 'with' keyword, you've used a context. To understand why they're cool, you need to know a bit about Resource Acquisition Is Initialization (RAII). The key concept is that the acquisition and release of resources is tied to object lifetimes. The resources in question are things like memory and file descriptors – anything that is limited and used by the computer. File descriptors are the classic example, as operating systems allocate from a limited pool of descriptors when files are opened for reading and writing. In Python, files are opened with the built-in open function:


def getData(fileName):
    fl = open(fileName)
    data = fl.read()
    fl.close()
    return data

Open and shut, except for a couple of things. If an error occurs while reading the data, fl.close() never gets called – or one could simply forget to call close(). Either way, the file descriptor remains open until the Python application space is shut down. If you're running this code in Maya, that could be quite a while.
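Before contexts, the usual fix was an explicit try/finally, which guarantees the close at the cost of boilerplate (a sketch of the same hypothetical function):

```python
def getData(fileName):
    fl = open(fileName)
    try:
        return fl.read()
    finally:
        # runs even if read() raises or we return early
        fl.close()
```

This works, but every caller has to remember to write the same scaffolding.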

The RAII idiom ties the acquisition of the file descriptor to the lifetime of an object. This works great in C++, where objects live and die in very predictable ways. For example, in C++ you can open a file with std::ifstream:


#include <string>
#include <fstream>
#include <streambuf>

int accessFile(const std::string& name, std::string& data) {
    data.clear();
    std::ifstream ip(name.c_str());
    if(ip) {
        data.assign((std::istreambuf_iterator<char>(ip)),
                 std::istreambuf_iterator<char>());
        return 1;
    }
    return 0;
}

When accessFile returns, ip goes out of scope and is destroyed, and its destruction releases the file handle. Even if an exception occurs, the ip instance is destroyed during stack unwinding, and the file closes.

In Python and other garbage collected languages, the lifetime of ip is not clear. When the Python function returns, ip is unbound, but the data structure it pointed to exists until the Python garbage collector deletes it. As a result, even if the deletion of the Python file object released the file descriptor, it would do so at an unpredictable time compared to the C++ version.

Enter the Context

The context is an object that implements an __enter__() and an __exit__() method – these methods provide setup and cleanup hooks that run when code enters and leaves the scope of a 'with' statement. Because these are ordinary methods, it is known exactly where and when they get called.
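A minimal class-based sketch (with made-up names) shows the two methods directly:

```python
class Managed(object):
    def __enter__(self):
        # setup: runs when the 'with' block is entered
        print('enter')
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # cleanup: runs when the block exits, even on exceptions
        print('exit')
        return False  # returning False lets exceptions propagate

with Managed():
    print('working')
```

Running this prints 'enter', 'working', then 'exit'.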

Now the proper way to open a file object for reading or writing is to use the ‘with’ statement:


with open('myFile.txt') as ip:
    data = ip.read()

The file object context protects against exceptions – so the file handle is guaranteed to close when the context closes no matter what.
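It's easy to verify that the handle closes even when the block raises (a quick sketch using a throwaway temp file):

```python
import tempfile, os

# make an empty temp file to open
fd, path = tempfile.mkstemp()
os.close(fd)

try:
    with open(path) as ip:
        raise RuntimeError('simulated failure')
except RuntimeError:
    pass

print(ip.closed)  # True: the context closed the file for us
os.remove(path)
```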

The contextlib.contextmanager decorator is what powers the keepSelection context above. It turns a generator into a context manager: the code before the 'yield' performs the setup, the 'yield' hands a result to the 'with' block (keepSelection yields None), and when the 'with' statement is exited, the code after the yield statement is executed.

For a simpler, non-Maya example, you can try the following:


@contextmanager
def testContext():
    print('Enter')
    yield
    print('Exit')

with testContext():
    print(' Printing')

# Result
Enter
 Printing
Exit

Handling Exceptions

The contextmanager generator is not exception-proof by default, so it often needs to handle exceptions itself. Consider the previous example when an exception is thrown inside the 'with' block:


with keepSelection():
    loc = cmds.spaceLocator()
    cmds.select(loc)
    raise RuntimeError

Run this with the cube selected as before, and the runtime error will show up in the Script Editor with the locator still selected, meaning the cleanup code did not run. This may or may not be desirable, but here's how to handle exceptions so you can make the call:


@contextmanager
def keepSelection():
    # setup
    sel = om.MSelectionList()
    om.MGlobal.getActiveSelectionList(sel)
    try:
        yield
    finally:
        # cleanup
        om.MGlobal.setActiveSelectionList(sel)

Now the cleanup will run no matter what exception is encountered. The exception will still raise – code can be added to catch the exception based on need.
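The same guarantee can be verified outside Maya with a generator that records its phases (a minimal sketch):

```python
from contextlib import contextmanager

events = []

@contextmanager
def tracked():
    events.append('setup')
    try:
        yield
    finally:
        # runs whether or not the 'with' block raised
        events.append('cleanup')

try:
    with tracked():
        raise RuntimeError('simulated failure')
except RuntimeError:
    pass

print(events)  # ['setup', 'cleanup']
```

The exception still propagates out of the 'with' statement; the finally block simply guarantees the cleanup runs first.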

I am planning to do a follow-up post with some more info and hopefully some useful decorators. In the meantime, I'll leave you with one I find interesting – it's not a full implementation, but a decent start. This one makes raising exceptions optional, should they occur:


@contextmanager
def tempNamespace(ns, stopOnError=False):
    cur = cmds.namespaceInfo(cur=1)
    cmds.namespace(set=':')
    cmds.namespace(add=ns)
    cmds.namespace(set=ns)
    try:
        yield
    except Exception:
        if stopOnError:
            raise
    cmds.namespace(set=':')
    cmds.namespace(mv=(ns, ':'), f=1)
    cmds.namespace(rm=ns, f=1)
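The optional-raise idea itself is independent of Maya, and a stripped-down sketch can be tested with plain Python:

```python
from contextlib import contextmanager

@contextmanager
def optionalRaise(stopOnError=False):
    try:
        yield
    except Exception:
        # swallow the exception unless the caller asked to stop
        if stopOnError:
            raise

results = []
with optionalRaise():  # error is swallowed
    results.append('before')
    raise ValueError('oops')
print(results)  # ['before']

try:
    with optionalRaise(stopOnError=True):
        raise ValueError('oops')
except ValueError:
    print('raised')
```

When a contextmanager generator catches the exception without re-raising, the 'with' statement suppresses it entirely.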

 


Iterator Generators and Maya

Iteration is a fundamental concept of computer science, and Python’s iterator generators make it a snap to organize and re-use iteration. I find that I am frequently iterating through large hierarchical structures, both ‘up’ and ‘down’ them looking for nodes with specific qualities.

In Maya, this often means iterating through the DAG, which is Maya's version of a scenegraph. Each level of the DAG can have one parent (for transforms, not shapes) and multiple children. If you are traversing the DAG looking for items, say in a character's skeleton, Python iteration could be what you're looking for.

In PyMel, you can grab a PyNode and ask for its parent with getParent(). I find, however, that I often need to iterate farther up the DAG looking for an ancestor that has a particular name or attribute. This could be accomplished by multiple calls to the getParent method of each node, but that can be cumbersome, and is harder to do when using list comprehensions. Fortunately, a simple iterator generator can be whipped up in no time:


def parentIter(pnode):
    p = pnode.getParent()
    while p:
        yield p
        p = p.getParent()

Now we can hand this a PyNode (or anything with a getParent method) and it’ll run all the way up the DAG. This is handy if you have a schema where you are parenting a character’s skeleton under a root transform.

from pymel.core import PyNode

for t in parentIter(PyNode('someJoint')):
    if t.type() == 'transform':
        pass  # do something here
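
Since parentIter only depends on a getParent method, it can be exercised outside Maya with a stand-in class (Node here is a made-up test double, not a PyMel type):

```python
class Node(object):
    """Minimal stand-in exposing the getParent interface."""
    def __init__(self, name, parent=None):
        self.name = name
        self._parent = parent

    def getParent(self):
        return self._parent

def parentIter(pnode):
    p = pnode.getParent()
    while p:
        yield p
        p = p.getParent()

root = Node('root')
spine = Node('spine', root)
hand = Node('hand', spine)
print([n.name for n in parentIter(hand)])  # ['spine', 'root']
```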

This is pretty useful – I often will add a bit of sugar to make life easier. Consider a function that expects to operate on a character – it takes any node that’s part of the character, but needs to get some info off of the root node. Often, you’d need to check to see if the caller passed in the root node itself, or any of the joints. You would need something like this:


def getRoot(node):
    if node.type() == 'transform' and node.hasAttr('info'):
        return node
    for nd in parentIter(node):
        if nd.type() == 'transform' and nd.hasAttr('info'):
            return nd

In order to prevent duplicating the testing part (seeing if a node is a transform and has an attribute called info), the parentIter function can be modified as follows:


def parentIter(pnode, inclusive=False):
    if inclusive:
        yield pnode
    p = pnode.getParent()
    while p:
        yield p
        p = p.getParent()

Now, the iteration can be a bit simpler:


def getRoot(node):
    for nd in parentIter(node, inclusive=True):
        if nd.type() == 'transform' and nd.hasAttr('info'):
            return nd

Eliminating those extra lines can reduce visual clutter and potential error. Also, if you need to update the line that does the filtering, you only have to change it in one place.

Child iteration can be just as simple if you don’t care about the order and you don’t need to skip branches of a hierarchy:


def childIter(pnode, inclusive=False):
    if inclusive:
        yield pnode
    for ch in pnode.getChildren():
        yield ch
        for gch in childIter(ch):
            yield gch

Again, the inclusive keyword is used to simplify the iteration. I usually keep the keyword optional, as there is a blurry line between what you might expect: 'Give me all the nodes under x' as opposed to 'Give me x and all the nodes underneath it'. I find plenty of use cases for both in my work, so keeping the arg as a convenience seems efficient and clear.
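The same stand-in trick works for childIter – a made-up Node class with getChildren is enough to see the traversal order:

```python
class Node(object):
    """Minimal stand-in exposing the getChildren interface."""
    def __init__(self, name, children=None):
        self.name = name
        self._children = children or []

    def getChildren(self):
        return self._children

def childIter(pnode, inclusive=False):
    if inclusive:
        yield pnode
    for ch in pnode.getChildren():
        yield ch
        for gch in childIter(ch):
            yield gch

rig = Node('root', [Node('spine', [Node('head')]), Node('hips')])
print([n.name for n in childIter(rig, inclusive=True)])
# ['root', 'spine', 'head', 'hips']
```

Note the depth-first order: each child is yielded before its siblings' subtrees.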

One thing to note about these generators – if you need a list of everything the generator would return, you need to use a list constructor to capture it, i.e.:


# get all the nodes that are ancestors in a list
list(parentIter(node))

Failure to do so will often result in the calling code erroring out with the generator object:


# Error: AttributeError: file line 1: 'generator' object has no attribute 'append' #

 


Iota, Kappa… Lambda!

Those new to Python can stumble on the humble lambda. It does seem odd, with its odd little syntax and its wee beady colons. What does it mean? Why does it look so strange? Let's shed some light on it, shall we?

The lambda comes from Lambda Calculus, and is an integral part of two languages I claim to love, LISP and Scheme. Python took all its best stuff from these languages, and so it is only natural that lambda would be pilfered as well.

Lambda is a function that is not bound to a name. Most functions in Python follow this form:


def myFunction():
    pass

When you enter this code into the Python interpreter, it sets aside the string 'myFunction' and binds it to a piece of compiled code. Python is, in essence, a giant dictionary made up of other dictionaries that map strings to pieces of code, values, and so on. Lambda is used when you want to manipulate small functions that don't need cumbersome names and docstrings – small, temporary functions.

Much of the work lambda was doing has been replaced by list comprehensions. Before those, Python users relied on functions like filter(). For example, to get a list of even numbers, one could do the following:


>>> list(filter(lambda x: x % 2 == 0, range(9)))
[0, 2, 4, 6, 8]

Much more convenient than writing out an isEven function, especially if you are only using it in one or two places. You can see how the list comprehension is similar, and, one could argue, cleaner:


>>> [x for x in range(9) if x % 2 == 0]
[0, 2, 4, 6, 8]

If you've messed around with lambda, you are probably aware it is not like a normal function. A lambda is an expression, which means you can't declare variables, write complicated control statements, or use a return statement. With lambda, the result of the expression is returned immediately. Consider a simple example:


>>> addFive = lambda x: x + 5
>>> addFive(7)
12

Notice, the result of x + 5 is immediately returned – no assignment, no return statement. Lambda is often used to bind arguments to functions dynamically, where the result of the function is returned immediately.

I often use lambda as a predicate for a function that acts like filter(). Consider a function that finds all files for which a predicate returns true:


import os

def find(root, pred=lambda x: True):
    root = os.path.abspath(root)
    result = []
    def _tm(dr, pred, output):
        for i in os.listdir(dr):            
            pl = os.path.join(dr, i)
            if pred(pl):
                output.append(pl)
            if os.path.isdir(pl):
                _tm(pl, pred, output)
    _tm(root, pred, result)
    return result


if __name__ == '__main__':
    for t in find('c:/Users/user/Documents/maya', pred=lambda x: x.endswith('.mel')):
        print(t)

The lambda default argument simplifies the coding – the code can always just call the predicate function, so there is less branching and fewer chances for error.

It can be used for binding partial arguments, but functools.partial handles that and is a bit more conventional Python.
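For comparison, here is the same binding done both ways (power here is a made-up example function):

```python
from functools import partial

def power(base, exp):
    return base ** exp

# bind exp=2 with a lambda...
squareLambda = lambda x: power(x, 2)
# ...or with functools.partial
squarePartial = partial(power, exp=2)

print(squareLambda(5), squarePartial(5))  # 25 25
```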

Lambda is useful in conjunction with Python’s various sort methods where it can serve as a key or cmp function – this is handy when, say, sorting the items in a dict by value:


dct = {'test_a' : 3,
       'test_b' : 1,
       'test_c' : 2}

>>> for k in sorted(dct.items(), key=lambda x: x[1]):
...     print(k)

('test_b', 1)
('test_c', 2)
('test_a', 3)

Lambda and Class Properties

Python properties are an area where lambda can play a role. Consider properties inherited by derived classes:


class A(object):
    @property
    def data(self):
        return 'A'

class B(A):
    def data(self):
        return 'B'

The flaw in this becomes evident when the property is accessed.


>>> a = A()
>>> print('a: %s' % a.data)
a: A

>>> b = B()
>>> print('b: %s' % b.data)
b: <bound method B.data of <__main__.B object at 0x00000000023B15C0>>

B's data method needs the @property decorator to work, which seems non-intuitive with regard to how inheritance normally behaves. Using property() directly still doesn't give me what I'm looking for:


class A(object):
    def getData(self):
        return 'A'

    data = property(getData)

class B(A):
    def getData(self):
        return 'B'


>>> b = B()
>>> print('b: %s' % b.data)
A

In this situation, lambda comes through:


class A(object):
    def getData(self):
        return 'A'

    data = property(lambda self: self.getData())

class B(A):
    def getData(self):
        return 'B'

>>> b = B()
>>> print('b: %s' % b.data)
B

This does the trick – because the property's lambda receives the instance as self, and self is an instance of the derived class, the derived getData method is called. The property could be updated to call a derived setter as well, if there were a setData method.
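A hedged sketch of that two-way version, with a hypothetical setData added (this variant stores the value on the instance rather than returning a constant):

```python
class A(object):
    def getData(self):
        return getattr(self, '_data', 'A')

    def setData(self, val):
        self._data = val

    # both lambdas receive the *instance*, so derived overrides win
    data = property(lambda self: self.getData(),
                    lambda self, val: self.setData(val))

class B(A):
    def setData(self, val):
        # derived setter: store a modified value
        self._data = val * 2

b = B()
b.data = 5
print(b.data)  # 10
```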

The Moral of the Story
Lambda is useful for creating simple functions that evaluate expressions that can be used to perform simple calculations or compose other function calls on the fly. While many of its most common uses have been supplanted by list comprehensions, it is still a handy tool to have in the toolbox.


Emulating PyQt Signals With Descriptors

This is a longer post than usual, but it’s pulling several different topics together.

While working on a personal project, I felt the need to mock up PyQt signals. If you're not familiar with signals, they are an implementation of the observer pattern used to pass events in the Qt system. In the C++ implementation, they are very powerful and expressive, as the Qt MOC does a lot of work to make them easy to use, saving you the effort you'd otherwise need to get flexible callbacks in pure C++. In Python, they are not as impressive, only because Python's dynamic binding and duck typing make implementing arbitrary callbacks a snap.

To use a signal in PyQt, you instantiate a QObject with a signal and connect it to a slot on another QObject. All of the Qt widgets come with signals, and often you will use them or implement your own. A sample custom signal/slot system is seen below:


from PySide import QtCore

class MySource(QtCore.QObject):
    mySignal = QtCore.Signal(object, int)
    def __init__(self, parent=None):
        QtCore.QObject.__init__(self, parent)

class MyDest(QtCore.QObject):
    def __init__(self, parent=None):
        QtCore.QObject.__init__(self, parent)

    def mySlot(self, obj, i):
        print('Object: %s, Int: %s' % (obj.__class__.__name__, i))

if __name__ == '__main__':
    a = MySource()
    b = MyDest()
    a.mySignal.connect(b.mySlot)

    a.mySignal.emit(a, 12)

Exec’ing this file yields:


Object: MySource, Int: 12

I was working with PySide and getting some errors, so I decided I would make classes that function like PyQt objects with signals, get the code working, then swap out the underlying objects with their Qt equivalents. I've always liked Qt's signals, ended up using the Python implementation a fair amount in my project, and thought it would be a good exercise to create them from scratch.

To emulate the calling style of a PyQt signal, the Python descriptor is a natural choice. Descriptors normally get or set a value. In this case, we want the __get__ to return an object that has a connect method that will create the signal connection. The connection is an entry in a WeakKeyDictionary. The weak key dict will cleanup the signals when an object is deleted by the Python garbage collector – this prevents signal connections from keeping an object from being deleted.
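The core of that idea – a __get__ that hands each owner instance its own helper object, keyed weakly – can be sketched in isolation (names here are illustrative, not part of the signal code):

```python
import weakref

class PerInstance(object):
    """Descriptor returning a distinct helper object per owner instance."""
    def __init__(self):
        # weak keys, so the descriptor never keeps instances alive
        self._store = weakref.WeakKeyDictionary()

    def __get__(self, obj, typ=None):
        if obj is None:
            return self  # accessed on the class, not an instance
        return self._store.setdefault(obj, [])

class Owner(object):
    tags = PerInstance()

a, b = Owner(), Owner()
a.tags.append('selected')
print(a.tags, b.tags)  # ['selected'] []
```

Each instance gets its own list, created on first access, just as each instance below gets its own _Sig.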

The _Sig object is responsible for mapping the class instance to the target instances. It stores the args of the signal, which would be the types that are going to be sent when the signal’s emit function is called. WeakKeyDictionary is used throughout to prevent circular references from keeping objects alive – this is strictly an observer pattern.

It would be nice to keep a reference only to the methods, but I was unsuccessful storing those in a WeakSet, so the object itself is used as the key. The _Sig class only works with class instances, although it should be possible to adapt it to trigger static functions.


import weakref, inspect

class _Sig(object):
    """Maps objects to create Signal connection."""
    def __init__(self, pr, args):
        """
        Args:
            pr (object): An instance of a class.
            args (list or tuple): A sequence of arg types.

        Attributes:
            _ob (object): Instance of the class that would be the
            sender.
            _args (list or tuple): The sequence of arg types to use
            for the Signal.
            _conn (WeakKeyDictionary): Maps the target item to the
            target method.
        """
        self._ob = pr
        self._args = args
        self._conn = weakref.WeakKeyDictionary()

    def emit(self, *args):
        """Emits the signal

        Args:
            The values to emit - the types should match _args.

        Note:
            If _ob has a method called 'signalsBlocked', it will be
            called. If the method returns True, the signal is not
            emitted.

        Raises:
            TypeError: If one of the emitted args is not an instance
            of _args, TypeError is raised.

        """
        if hasattr(self._ob, 'signalsBlocked') and self._ob.signalsBlocked():
            return
        for idx, (ik, k) in enumerate(zip(self._args, args)):
            if not isinstance(k, ik):
                msg = 'Pos %s: Expected %s, got %s' % (idx, ik, k)
                raise TypeError(msg)

        for val in self._conn.values():
            for ck in val:
                ck(*args)

    def connect(self, method):
        """Create connection to method.

        Args:
            method (instanceMethod): The 'slot' method to be called
            when Signal is emitted.

        Raises:
            TypeError: Method must be a method on a class instance.

        """
        if not inspect.ismethod(method):
            raise TypeError('Expected method, got %s' % method.__class__)
        self._conn.setdefault(method.__self__, []).append(method)

    def disconnect(self, method):
        """Disconnect a method from the signal.

        Args:
            method (instanceMethod): The 'slot' method to be disconnected.

        """
        current = self._conn.get(method.__self__, [])

        current = [x for x in current if x.__func__ is not method.__func__]

        assert method not in current
        if current:
            self._conn[method.__self__] = current
        else:
            self._conn.pop(method.__self__, None)

    def getOutputs(self):
        """Get a list of output methods.

        Returns:
            A list of methods that will be triggered when this signal
            is emitted.

        """
        result = []
        for val in self._conn.values():
            result.extend(val)
        return result

By itself, the _Sig does nothing – it needs a descriptor to access it. The Signal descriptor uses a __set__ method that raises a RuntimeError, which keeps the user from assigning a value to the attribute and thus overwriting it.


class Signal(object):
    """The actual Signal object.

    This descriptor does the work of providing access to the
    underlying _Sig object.

    Attributes:
        _map (dict): Map of Signal instances to WeakKeyDictionaries.

    """
    # Map this signal instance to some data.
    _map = {}

    def __init__(self, *args):
        """Initialize the Signal

        Args:
            List of types - these types are used to match the types
            emitted by the Signal.

        """
        self._args = args

    def __get__(self, obj, typ=None):
        """Return a _Sig object mapped to obj.

        We use the signal as the key so that other signals do not
        collide. Each Signal instance is unique and located in a class
        definition.

        A weak key dict maps the actual instance to the _Sig object.

        Args:
            obj (object): Instance of for the descriptor.
            typ (class): The owner.

        Returns:
            A _Sig object initialized to obj.

        """
        tmp = self._map.setdefault(self, weakref.WeakKeyDictionary())
        return tmp.setdefault(obj, _Sig(obj, self._args))

    def __set__(self, obj, val):
        """Raise an exception.

        This prevents a user from setting an attribute on the instance
        that would replace the signal.

        Args:
            obj (object): Instance of class.
            val (object): Value to set to.

        Raises:
            RuntimeError: Always raise this.
        """
        raise RuntimeError('Cannot assign to Signal')

To use the signal, put it in a class definition and connect it from an instance of the class. Notice that the classes should be new-style classes derived from object, since descriptors only work on new-style classes.


class Source(object):
    test = Signal(int, float, object)

class Target(object):
    def slot(self, i, f, o):
        print('slot(%s, %s, %s)' % (i, f, o))

src = Source()
trg = Target()
src.test.connect(trg.slot)
src.test.emit(12, 33.4, src.__class__.__name__)

Executing the code above yields:

slot(12, 33.4, Source)

This implementation of the Signal checks the types of the emitted args, which was its goal, since it is emulating a Qt-style signal. Different implementations could omit the type checking, although I find it handy in larger systems. There is no concept of the sender() method – that is a method on QObjects that interfaces with the signals. It would be possible to add the sender as an argument when the signal is emitted. For Python, one could also add optional keyword arguments. That will be left as an exercise for the reader.

There is also no checking for cycles – one could create an endless cycle of signals to slots. I may address this in a future post if it comes up.


Visual Python Error Reporting

3d pipeline application development for a studio seems very much like developing and maintaining web services for a website. You have a large user base, and you do not expect those users to be very technically inclined. Animators are, after all, necessarily more interested in creating interesting performances than in the somewhat obscure minutiae of node graphs and connection types. This is not to say there are no technical animators out there – there are many very good ones, to be sure. A large studio, though, invests time and money in providing more technical support to maximize animator performance time and minimize cursing time. We have, as yet, no solution to minimize animator drinking time, however.

The tricky part is that modern 3d application software provides almost too many options for the non- or semi-technical user to cause mischief. The design and implementation of animation tools is a constant battle to anticipate the infinite number of states an artist's animation scene can be in.

So when, in the course of human events, it becomes necessary for animation tools to fail, it would be optimal if they were to fail in a useful way. Many artistic types pay no attention to the red-flash of the status bar familiar to many TDs (if they even have it displayed), and the Script Editor is as foreign as Chrome’s developer console.

Fortunately, if you’re using Python, you have some tools available to help with better error reporting. A clear winner in this area is the decorator. Python decorators wrap functions and re-bind them in the Python interpreter, allowing the re-use of code with minimal intrusion. One practical application of this is a decorator that can provide useful information about an Exception.


from functools import wraps

def displayException(f):
    # copy function info to decorator
    @wraps(f)
    def _wrapped(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except Exception as e:
            # try to import PySide - if you can't, bail
            try:
                from PySide import QtGui
                # check to see if the application is running - if not,
                # don't launch ui
                if QtGui.QApplication.instance():
                    QtGui.QMessageBox.critical(None, 'Python Exception', str(e))
            except ImportError:
                pass

            # allow the exception to propagate up
            raise
    return _wrapped

Now you can use this to wrap any function you have and, if it fails, it will at least present an error dialog to the user. I recommend using it only at your higher-level entry-point functions. If you use it in lower-level loops, you run the risk of popping up hundreds of dialogs.


@displayException
def myMainFunction():
    """This does some big operation - like building a rig or publishing a
    file
    """
    pass

This code uses functools.wraps to update the returned wrapper with the function's docstring and other attributes. It also checks whether PySide can even be imported – my use case is Maya, but this will work in any Python code where PySide may or may not be available. Finally, it checks that a QApplication is running – good practice, since launching a UI while Maya is running in batch mode would be a bad idea.
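The dialog itself requires Maya and PySide, but the wrapping behavior can be checked anywhere by swapping the message box for a print (a sketch with a made-up entry point):

```python
from functools import wraps

def displayException(f):
    @wraps(f)
    def _wrapped(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except Exception as e:
            print('Error: %s' % e)  # stand-in for the message box
            raise
    return _wrapped

@displayException
def publishFile():
    """Pretend entry point that always fails."""
    raise ValueError('bad scene')

print(publishFile.__doc__)  # wraps preserved the docstring
try:
    publishFile()
except ValueError:
    pass  # the exception still propagated after the report
```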


Proper Parenting

I'm a stickler for proper object management. I started my object hierarchy building with boost::shared_ptr in C++, and was really taken with Resource Acquisition Is Initialization (RAII) in terms of memory management. My first foray into hierarchical structures yielded a simple pattern: an object in the structure holds a weak pointer to its parent and strong pointers to its children.

This pattern should be fairly obvious: when an object is deleted, its children should be deleted also. This requires that the parent own its children. When it comes to the parent, however, if an object owned its parent as well as being owned by it, why, that would be absolutely crazy. Imagine how bizarre the custody battles would be. This is called a circular reference, and with shared ownership it causes leaks in compiled languages. To prevent it, the child keeps only a weak reference to its parent, maintaining order in the digital family.

In C++, the boost pointer libraries made this very easy:


// This is not the complete code for a class
class Item {
public:
    typedef boost::weak_ptr<Item> WeakPtr;
    typedef boost::shared_ptr<Item> SharedPtr;
    typedef std::vector<SharedPtr> ChildArray;

private:
    WeakPtr _parent;
    ChildArray _children;
};

In Python, every reference behaves like the shared_ptr – that is, while an object is referenced, it is never garbage collected. When I began working hardcore in Python, I found a number of object hierarchies that used ordinary Python references to maintain parent/child relationships. In older versions of Python, this could result in memory leaks and slow performance. Python's garbage collector has since been updated to deal with circular references, but I still regard them as poor programming practice, inviting trouble. But how to avoid them?

Enter the weakref module. It allows the creation of weak pointers in Python. To implement the Python version of the C++ class, it can be used to handle the parent:


import weakref

class Item(object):
    def __init__(self, parent=None):
        self._parent = weakref.proxy(parent) if parent else None
        self._children = []

    def getParent(self):
        return self._parent

    def getChildren(self):
        return self._children

Now we're talking – saving Python from itself, maybe saving the world – it depends on what you're writing. Regardless, you can rest assured you won't introduce any leaks or extra CPU lag into your application.
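A quick check (with a hypothetical child-registration line added to __init__) shows the parent can be collected even while a child still holds its proxy – note this relies on CPython's immediate refcounting:

```python
import weakref

class Item(object):
    def __init__(self, parent=None):
        self._parent = weakref.proxy(parent) if parent else None
        self._children = []
        if parent is not None:
            parent._children.append(self)  # strong ref down the tree

    def getParent(self):
        return self._parent

root = Item()
child = Item(root)
del root  # only the weak proxy remains, so the parent is collected
try:
    child.getParent()._children
except ReferenceError:
    print('parent was garbage collected')
```

The child did not keep its parent alive – exactly the behavior we want from a weak up-pointer.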
