Tuesday, February 16, 2010

Building better API documentation

A while back, over on the Cogi blog, James announced the public beta of our new remote API. Designing and implementing this API has been a major focus for our team over the last several months. We've taken several of our private interfaces and refactored and reimplemented them into a single interface that we can expose to our customers and partners, allowing them to develop applications that take advantage of the rich backend systems we've built over the past couple of years.

Now, one consequence of releasing a remote API to consumers beyond your own applications is that the API itself becomes a kind of contract between your service and those consumers -- a contract that you must both present and maintain. In this post, I'd like to talk about the primary mechanism through which we present our API -- the API documentation.

There is a whole laundry list of things that can go wrong with any API documentation -- for starters, let's consider just this small (and incomplete) list of problems:
  • Stale documentation
  • Missing documentation
  • Documentation that's difficult to interpret
An interesting take-away is that the first two points both stem from the documentation not matching the interface that's actually deployed on the server -- that is, if we had a way of connecting the documentation to the service itself, those problems would be far less likely to occur. So, we decided to introduce the concept of an API "demo" as our documentation.

So, rather than merely describing the interface we provide and expect in some separately maintained document, our API documentation shows you what to do -- how the call is formed, and what the response looks like. What's more, in the process, it demonstrates that the API is working as described, because the demo makes a live call into the API itself! In fact, the API documentation is essentially an automatically generated client of the API.

Better still, since the API documentation is dynamically generated at runtime, we never have to worry about it going stale: what is displayed in the documentation exactly matches the code the API server is actually running -- including each method's interface, its documentation, and the format of its response.
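To give a flavor of the idea, here's a minimal sketch (not our actual implementation -- the EchoService class and its method are hypothetical stand-ins) of how Python's introspection facilities support this kind of self-describing documentation:

from __future__ import print_function
import inspect


class EchoService(object):
    '''A stand-in API service, for illustration only.'''

    def echo(self, message, repeat=1):
        '''Return the given message, repeated `repeat` times.'''
        return {'result': [message] * repeat}


def document(service):
    '''Render documentation for every public method at runtime.

    Because the names and docstrings are read from the deployed code
    itself, the generated documentation cannot drift out of date.
    '''
    for name, method in inspect.getmembers(service, inspect.ismethod):
        if name.startswith('_'):
            continue
        print(name)
        print('    ' + (inspect.getdoc(method) or '(undocumented)'))


if __name__ == '__main__':
    document(EchoService())

A real version would also reflect each method's signature and invoke the method against sample data, which is what lets the documentation double as a live demo.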

Want to try it out? Contact us at api@cogi.com to get an API key and start developing against the Cogi API! Have other thoughts? Leave them in the comments below!

Thursday, February 4, 2010

Python Tracing Performance

Looking back at my introductory post, I realize I didn't mention that I am currently working on my M.S. degree in Computer Science at UCSB. Occasionally, my academic work leads me to discover something immensely applicable to my industrial work -- I recently encountered one such case that was fairly game-changing in terms of how I intend to use Python for large processing tasks.

The details of the particular assignment I was working on aren't pertinent here. Suffice it to say that I needed to perform some fairly complex processing on a fairly large graph of nodes, and that I was particularly interested in some of the relationships between many of those nodes.

Once the majority of the code was written and tested, I attempted to run it on the large dataset, and found that while progress was being made, it was being made very slowly, taking upwards of 10 minutes to complete one particular portion of the processing. Naturally, I dug back into the relevant code, searching for where I had allowed some O(N^2) code to sneak into what was supposed to be an O(N log N) algorithm. Much to my dismay, I couldn't find any place where I had made such an obvious mistake (well, partial dismay -- I was actually glad I hadn't screwed it up).

Once I was satisfied that there were no major algorithmic mistakes, I began optimizing away whatever constant-factor work I could. I pushed many of the operations into other areas of the code where I was already manipulating the object in question, restructured some of the code to avoid re-examining the same object, and so on. However, none of these optimizations resulted in any worthwhile performance gains.

Up until then, I hadn't given my trace routines much thought, as I was simply using them to help debug the problem at hand. But even with tracing disabled, the code was still extremely slow to run... then it hit me -- even when my trace function was "disabled", I still had to call the function. So on a whim I commented out all of the calls to my trace function and ran the algorithm again... and before I could lean back in my chair to wait, the results popped up on my screen. I was shocked -- from 10 minutes to less than a second, just by removing my "disabled" trace lines.

Now, I've been writing Python code for a while -- I'm fairly consistent in sticking to the performance patterns mentioned in the Python wiki and working with strings efficiently -- but I was amazed by how much this one little change affected my program.

As I thought about it, I realized it made sense: function calls in Python are fairly expensive, so even if you're only making O(N) of them, when N is large enough that still adds up to a lot of function calls. So, I decided to develop a test module to compare the various simple tracing techniques I could quickly write down. Here's the code for the testing module:


'''
Demonstrate the performance impacts of various log/tracing techniques.

The tests in this module do not test the performance of various logging
methods while logging is enabled, as we likely don't mind the overhead 
of checking to see whether the statement is enabled or not when we're 
actually writing to sys.stdout or a file (the I/O overhead is usually 
going to be much larger than even a few function calls).

Instead, these tests focus on the running code while it contains 
*disabled* log statements to see how that unused code is ultimately
affecting program performance.
'''
from __future__ import print_function
from logging import getLogger
import timeit

TRACE_MESSAGE_ARGS = tuple(range(5))
TRACE_MESSAGE_FORMAT = 'Traced message: %s' % ', '.join('%s' for _i in TRACE_MESSAGE_ARGS)



def method_0():
    '''Provide a baseline that simply comments out unused trace.'''
#    print(TRACE_MESSAGE_FORMAT % TRACE_MESSAGE_ARGS)
    pass


METHOD_1_ENABLED_LEVELS = set([
    'FOO',
    'BAR',
])



def method_1a():
    '''Use a trace method that checks for an enabled level.
    
    This is the typical simple trace mechanism. Using a set() for the
    level definition helps avoid unnecessary list scans, but two things
    are wrong with this approach:
        
        1. The string interpolation must be performed every iteration.
        2. We have to call the trace function every iteration.
    '''
    method_1a_trace('NOT_ENABLED', TRACE_MESSAGE_FORMAT % TRACE_MESSAGE_ARGS)

def method_1a_trace(level, message):
    if level in METHOD_1_ENABLED_LEVELS:
        print(message)



def method_1b():
    '''Similar to method 1a, but don't force the string interpolation every iteration.'''
    method_1b_trace('NOT_ENABLED', TRACE_MESSAGE_FORMAT, *TRACE_MESSAGE_ARGS)

def method_1b_trace(level, message, *formatArgs):
    if level in METHOD_1_ENABLED_LEVELS:
        print(message % formatArgs)



def method_1c():
    '''Similar to method 1a, but don't force the function call every iteration.'''
    if 'NOT_ENABLED' in METHOD_1_ENABLED_LEVELS:
        print(TRACE_MESSAGE_FORMAT % TRACE_MESSAGE_ARGS)



METHOD_2_LEVEL = False

def method_2():
    '''Use the trace level as a boolean guard.
    
    The concept here is to avoid executing a function call, or a string 
    interpolation step at all in the case where the level is disabled.
    
    This appears to be the fastest mechanism available short of actually
    removing the trace code from the program entirely (via a comment or 
    otherwise).
    '''
    METHOD_2_LEVEL and print(TRACE_MESSAGE_FORMAT % TRACE_MESSAGE_ARGS)

METHOD_2_LEVEL_FOO = 'foo'
METHOD_2_LEVEL_BAR = 'bar'
METHOD_2_LEVEL_DISABLED = 'disabled'

METHOD_2_ENABLED_LEVELS = set([
    METHOD_2_LEVEL_FOO,
    METHOD_2_LEVEL_BAR,
])

def method_2b():
    '''Mix of method_2 with enabled levels.'''
    METHOD_2_LEVEL_DISABLED in METHOD_2_ENABLED_LEVELS and print(TRACE_MESSAGE_FORMAT % TRACE_MESSAGE_ARGS)


method_3_logger = getLogger(__name__)
method_3_trace = method_3_logger.debug

def method_3():
    '''Use the built-in logging mechanism.
    
    This was included somewhat as an afterthought, since we've already
    shown that function calls are the major overhead to avoid. 
    
    It turns out this is by far the worst performing option available!
    '''
    method_3_trace(TRACE_MESSAGE_FORMAT, *TRACE_MESSAGE_ARGS)



def main():
    print('Method 0: ', min(timeit.Timer(method_0).repeat()))

    print('Method 1a: ', min(timeit.Timer(method_1a).repeat()))
    print('Method 1b: ', min(timeit.Timer(method_1b).repeat()))
    print('Method 1c: ', min(timeit.Timer(method_1c).repeat()))

    print('Method 2: ', min(timeit.Timer(method_2).repeat()))
    print('Method 2b: ', min(timeit.Timer(method_2b).repeat()))

    print('Method 3: ', min(timeit.Timer(method_3).repeat()))



if __name__ == '__main__':
    main()

On my machine here at work, the output of running that module looks like this:
Method 0:  0.203303173335
Method 1a:  1.00159193476
Method 1b:  0.559357103269
Method 1c:  0.222934412164
Method 2:  0.193546388001
Method 2b:  0.244648784973
Method 3:  5.09330805922

Method 0 provides a simple baseline for the test -- the fastest run with essentially no code takes about 0.2 seconds. Given that each timing run executes the test method a million times (timeit's default; repeat() performs three such runs, and we take the minimum), that seems reasonable for Python.

The Method 1{a,b,c} results represent variations on a simple trace(level, message) metaphor that I've frequently used for quick tracing. As you can see, dropping the string interpolation from 1a in test 1b cuts the time nearly in half, and dropping the function call in test 1c cuts it nearly in half again. The point to take away is this: a trace function really can't afford to do any work while it's disabled!

Method 2 turns out to be my preferred choice for simple tracing now -- just guard a print statement with a boolean condition that's either enabled or disabled. Note, it's important that the guard be cheap to evaluate in a boolean context (e.g., True, False, 0, 1) -- it wouldn't do any good if we had to evaluate some complicated logic just to see whether we were supposed to get past the guard. Method 2b is a simple variation on this theme that tries to combine the efficiency of method 2 with the convenience of the method 1 family. However, the slight degradation in performance doesn't seem worth it to me.
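For example, here's a minimal sketch of what method 2 looks like in the kind of per-node loop I described earlier (the node-processing details are hypothetical):

from __future__ import print_function

TRACE = False  # flip to True when you actually want the output

def process(nodes):
    for node in nodes:
        # When TRACE is False, Python short-circuits the 'and' here:
        # neither the string interpolation nor the print call executes.
        TRACE and print('visiting node: %s' % node)
        # ... the real per-node work goes here ...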

Astoundingly, using Python's provided logging module resulted in the worst performance by far! But when you think about it, the logging module is similar in concept to the 1a method, yet has to do a good deal more work to check its guard condition. In fact, judging from these results, it looks like about 5 times as much work!

There seem to be a couple of lessons we could take away from this brief analysis:

  1. If possible, be sure not to call any functions when checking a guard condition for a trace method -- function calls in Python are too expensive to execute just to decide not to print. (Note that this includes dot-operator accesses: an expression like "a.b" is essentially a function call like "getattr(a, 'b')", with some additional magic applied.)
  2. If you must use something like the logging module or a level-checking function for trace, only include trace where absolutely necessary, and never inside frequently executed loops. If you need the trace there for debugging, be sure to comment it out when you've finished your investigation. (One partial mitigation for the logging case is sketched below.)
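If you do find yourself needing the logging module around a hot loop, one mitigation (a sketch, assuming the logging level won't change mid-loop) is to hoist the level check out of the loop with Logger.isEnabledFor:

import logging

logger = logging.getLogger(__name__)

def process(nodes):
    # Pay for the level check (and its function call) once, up front,
    # rather than letting logger.debug() repeat it on every iteration.
    debugging = logger.isEnabledFor(logging.DEBUG)
    for node in nodes:
        if debugging:
            logger.debug('visiting node: %s', node)
        # ... the real per-node work goes here ...

This still costs one call per traced line when enabled, but when the level is disabled the per-iteration cost drops to a simple local-variable test.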

Tuesday, January 26, 2010

A developer's perspective on iPhone applications

I'm proud to announce that Cogi has joined the rest of the world and now has an iPhone application! We believe that we are uniquely suited to provide the best available recording application, for a variety of reasons. For more information on the app itself, check out its FAQ page. But convincing you of the merits of that app (of which I believe there are many) isn't the reason for this post. What I want to discuss here is the iPhone application development process.

To set the stage, let me note that I've worked on several applications, across many programming languages, with quite a few different development environments. However, after several weeks of development, I have come to the conclusion that the iPhone is quite possibly the worst application development environment I have come across yet.

Okay, you're hooked now, right? What am I going to say that's bad about XCode, Interface Builder and Objective C? Well, let's set that aside for the moment and discuss what's good about the environment first:
  1. XCode's refactoring tool may be the best I have ever seen -- just right-click on a symbol, provide a name, click preview, and apply. It even integrates nicely with Subversion, so I don't have to worry about renaming the files in Subversion first. The only thing I haven't been able to get XCode to refactor properly was a category I applied to a built-in class, which is understandable.
  2. The provided iPhone simulator is excellent -- it lets you build and test without an actual iPhone connected to your computer.
  3. There is an amazing amount of detailed documentation. (If you can find it, more on that below.)
Unfortunately, that's about all I see in this environment that stands out from the crowd -- everything else is either merely par for the course, or much worse.

So, what's so bad about it? Here are the highlights:
  1. Objective C. In order to build an application of any complexity, Apple has forced developers into learning and developing with Objective C. Now, I know there's a good deal of history there, but this is a language whose roots stretch back decades, and it shows. There is so much cruft just to get a simple class in place that I cringe, and I'm often tempted to avoid proper OO design and good testing practices just because of the nasty overhead involved. The square-bracket message-passing syntax is a nightmare to type out and read, and difficult at best to indent properly for multi-line messages. Finally, the ridiculously long names for methods and their arguments leave you with no choice but to accept either strange line breaks in your code or lines that are 200+ characters long.
  2. XCode. I see claims online all the time about how good XCode is, so maybe we're doing something wrong here at Cogi, but we all hate XCode. In particular, it doesn't help you navigate among many open files, and it wants to impose its idea of "groups" on top of your particular file-system layout. You actually have to jump through hoops to create a hierarchy of folders that makes sense.
  3. Interface Builder & UIKit. Why can't you set everything on an object in Interface Builder that you can set programmatically? Why aren't UITextField and UITextView related classes?1 This area is especially difficult to get working at all until you've read the hundreds of pages of documentation that Apple has made available.2
  4. No way to simulate phone calls. As far as we've been able to find, when running your code in the iPhone simulator, there is no way to simulate either incoming or outgoing phone calls. In the real world, applications have to deal with these -- with no way to test them in the simulator, every change requires a full build, deploy, test, and place-a-phone-call cycle, just to discover whether it works as you expected.
And all of this assumes you've already jumped through the hoops Apple requires just to start developing on a phone. And that you have a pretty good understanding of the hundreds of pages of documentation required to "get" how Apple wants you to develop applications for the iPhone. And that you already have a Mac available to do the development (really, there's no good reason to force development in XCode on a Mac -- just put out an SDK that can be used from Windows already!).

So, all that said, where do we go from here? Well, at Cogi, we are continuing to develop an outstanding iPhone application that our customers will love -- we developers will just have to suffer through some amount of pain and frustration along the way.



1. This issue is particularly frustrating. So much so, that I will likely be writing more about it in the coming weeks.
2. Apple could learn a lot from the Pythonism: "Simple is better than complex." (from the Zen of Python).

Friday, December 4, 2009

Introducing me!

Hi, I'm Chris.

I'm a software developer, and for the past year or so I've been working at Cogi. Prior to joining Cogi, I had worked with a couple of the others on the team, and I was really excited to work with them again in a fast-paced, dynamic environment focused on developing high-quality software written in Python.

Most of my development background is in C++ and Python, though I've been known to dabble in Flex, Java, Lisp, and lately Objective C, plus quite a few others. I tend to enjoy writing supporting libraries (for things like managing threads or state machines), but I also write a good deal of client- and server-side code.

My language of choice is, without a doubt, Python. I've worked with several other languages, and looking back I now see how much time I wasted fighting with them. I love how Python just gets out of my way and helps me get the job done. (Of course there was the initial shock of "what, no curly braces and indentation matters?", but I quickly got over that.) I've been using Python since around 2.3, and I'm loving the continually improving language, right on up to Python 3.x.

I'll be writing entries that mostly focus on working with Python, or some of the various tools and libraries that we use with Python, but I will likely also write entries for other projects and processes that we're involved with here at Cogi.

If you have thoughts or questions, please feel free to ask -- I like to think that I'm pretty good about responding to emails or comments, so fire away!

Tuesday, November 17, 2009

Introducing the Cogi Engineering Blog

Welcome to the new Cogi engineering blog!

As you may have guessed from the title, this blog is maintained by the software engineering department at Cogi. We are a bunch of developers with a wide range of experience that have come together to build what we believe are the "best of breed" recording and transcription solutions for your phone, computer, and mobile devices.

Over the next couple of weeks, several of us (the developers at Cogi) will likely write short introductions about ourselves, explaining what we do at Cogi, what our primary development areas are, and what kinds of topics we are likely to write about. You can expect to see engineering expertise ranging from embedded signal processing to web development, to desktop and mobile client development. Figuring out how to make our solutions scalable, reliable, correct, and robust is one of our core strengths.

Here at Cogi, we do a wide variety of development work using the Scrum development process. We build client applications, web and telephony servers, and transcription engines in Python, Objective C, C and C++. We also have experience working with SQL databases and C# .NET applications.

We are extremely excited to enter the blogging world, to share our experiences and learn from yours. Please feel free to contact us either via the comment system, directly through our profiles, or via email at devblog@cogi.com.