November 14, 2008

lxml.usedoctest is basically awesome

Hi. I know it’s been a while.

I’m here today to tell you how awesome lxml.usedoctest is. Check out a doctest:

def something_doing_xml_stuff():
    """
    >>> import lxml.usedoctest

    Namespaces are inherited as you'd expect.

    >>> print xml.a(xmlns="uri:a")[
    ...     xml.b[
    ...         xml.c(xmlns="uri:c")[
    ...             xml.d[ "I'm down here at D!" ]
    ...         ]
    ...     ]
    ... ]
    <ns0:a xmlns:ns0="uri:a">
        <ns0:b>
            <ns1:c xmlns:ns1="uri:c">
                <ns1:d>I'm down here at D!</ns1:d>
            </ns1:c>
        </ns0:b>
    </ns0:a>

    """
    pass

Now check out some example output from when it fails:

----------------------------------------------------------------------
File "tags.py", line 230, in __main__.Element
Failed example:
    print xml.a(xmlns="uri:a")[
        xml.b[
            xml.c(xmlns="uri:c")[
                xml.d[ "I'm down here at D!" ]
            ]
        ]
    ]
Expected:
  <{uri:a}a>
    <{uri:a}b>
      <{uri:c}c>
        <{uri:c}d>I'm down here at D!</{uri:c}d>
      </{uri:c}c>
    </{uri:a}b>
  </{uri:a}a>

Got:
  <{uri:a}a>
    <b>
      <{uri:c}c>
        <d>I'm down here at D!</d>
      </{uri:c}c>
    </b>
  </{uri:a}a>

Diff:
  <{uri:a}a>
    <{uri:a}b (got: b)>
      <{uri:c}c>
        <{uri:c}d (got: d)>I'm down here at D!</{uri:c}d (got: d)>
      </{uri:c}c>
    </{uri:a}b (got: b)>
  </{uri:a}a>

Putting aside the ugly ElementTree-like qname syntax, lxml.usedoctest:

  • Lets me make the XML in my doctest pretty
  • Shows me the expected/actual output pretty-printed as well
  • Goes on to show me a diff! (Maybe not the easiest to read, but it works.)
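
If you haven’t seen it before, the basic trick is that importing lxml.usedoctest inside a doctest swaps in an XML-aware output checker for that doctest, so the comparison is structural rather than character-by-character. Here’s a minimal, throwaway sketch with no tag-building library involved (minimal_example is just a made-up name); as I understand the checker, this should pass even though the attribute order and the self-closing tag don’t match the expected output textually:

def minimal_example():
    """
    >>> import lxml.usedoctest

    >>> print '<root><child b="2" a="1"/></root>'
    <root>
      <child a="1" b="2"></child>
    </root>
    """

if __name__ == '__main__':
    import doctest
    doctest.testmod()
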
December 26, 2007

The component system of my dreams

I’ve been working in Plone, so I’ve seen zope.component. I’m also thinking of making a (potentially) networked game in Python, and for that I was looking at things like Twisted and Kamaelia. Unsurprisingly, I’m not really satisfied with any of these.

Let me try to articulate what I’m looking for when I say “component system” (perhaps something more nebulous than “content management system”).

  • Obviously I want to divide my game into components that implement defined interfaces. For example, there’s a component that handles the network communication with players, another that handles streaming the game events to “observers” (people that watch but do not participate), a component to handle physics, a component to perform (for example) validation on incoming player commands, etc.

  • I want to define dependencies between components. These dependencies are then used to acquire a suitable implementation of each interface a component depends on.

    For example, when I start the game, maybe I “start” the front-end component that handles communications with players; let’s call that the “player server.” Now, the player server generates events such as “player connected,” “player moved,” etc. Somewhere (in code, or maybe even in something like ZCML) I’ve defined that this implementation of the component needs a component implementing IPlayerEventConsumer. The component system then finds (using my configuration) an implementation for that interface and makes it available to the player server. (There’s a rough sketch of what I mean just after this list.)

  • Assuming my components are written correctly, I want to be able to have a component execute in the same thread as other components (e.g. in the “main thread” as a microthread/coroutine), or in a new thread, or in a new process. For example, if I have lots of CPU-intensive physics code, maybe I want to run that on another processor, so I need threads. Of course, maybe I’m running on CPython, where I might need separate processes to bypass the GIL (ignoring for the moment the question of efficient IPC). Or maybe I’m working with something like a GUI where I need to have that run in a separate process (of course, doing a GUI can cause even bigger headaches).

  • I want the ability to implement components in other languages (see also: running a component in a separate process, above). This means I want a standard protocol for communication between components. XML-RPC comes to mind, or something else similarly easy to implement. There’s pickling in Python, but I don’t know how much fun that would really be to implement from a non-Python language; maybe something more language-agnostic?

    For fun, I’ll add that I’d like to be able to communicate with components over a variety of transports: pipes, Unix sockets, TCP/IP, and so on.
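
To make the dependency bullet above a bit more concrete, here’s roughly the kind of lookup I’m picturing, sketched with zope.interface and zope.component. IPlayerEventConsumer is the interface from my example; LoggingConsumer and handle_event are names I’m making up purely for illustration, and a real setup would do the registration via configuration rather than inline:

from zope.interface import Interface, implements
from zope.component import getGlobalSiteManager, getUtility

class IPlayerEventConsumer(Interface):
    def handle_event(event):
        """Do something with a player event."""

class LoggingConsumer(object):
    implements(IPlayerEventConsumer)

    def handle_event(self, event):
        print "player event:", event

# The "configuration" step: bind the interface to a concrete implementation.
getGlobalSiteManager().registerUtility(LoggingConsumer(), IPlayerEventConsumer)

# Inside the player server: ask for whatever implements IPlayerEventConsumer.
consumer = getUtility(IPlayerEventConsumer)
consumer.handle_event("player connected")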

Now, I think I’ve just described a billion other attempts at “component systems”: COM/DCOM, maybe EJB, CORBA, maybe KDE’s DCOP. Let me add a couple more requirements that should narrow the field a bit:

  • The “component system” must be platform-independent, or at least support Linux, *BSD, OS X, and Win32 (actually, I could give up Win32 if I had to).

  • I want the component system to be mostly transparent to me, the coder. I expect to have to configure the bindings between interfaces and implementations, specify dependencies, configure the manner in which a component will execute (microthread/thread/process; subject to the “execution style” the component is prepared to run in), and configure the location (e.g. host/port) of other components in the system. I don’t want to have to manually write up a proxy class for a remote component, for example. As much as is humanly possible, I don’t want my code to care whether a component is running in shared memory or on a box 1,000 miles away.

COM/DCOM is basically going to be Windows-only, right? DCOP might not be, what with KDE 4 supposedly running on Windows (right?). EJB has notorious boilerplate (though it has been a while for me); when I think CORBA, I think IDL; and so on. Those aren’t “mostly transparent.”

Kamaelia looks interesting. The system of “wiring” components together feels right to me; I was first exposed to this in NesC. However, the implementation needs to be updated to support the new generator features in Python 2.5, as the current syntax strikes me as rather ugly. In fact, it looks like Kamaelia needs a new release, period: the last one I saw was from 2006.
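
For what it’s worth, here’s the sort of coroutine style I’m imagining. This is not Kamaelia’s actual API, just hand-wired Python 2.5 generators, and uppercaser/printer are toy components I’m making up:

def printer():
    # A toy sink component: receives messages via send() and prints them.
    while True:
        msg = (yield)
        print msg

def uppercaser(target):
    # A toy filter component: transforms each message and passes it along.
    while True:
        msg = (yield)
        target.send(msg.upper())

# The "wiring": connect uppercaser's output to printer's input.
sink = printer()
sink.next()        # prime the coroutine (Python 2.5 style)
source = uppercaser(sink)
source.next()
source.send("hello, world")   # printer prints HELLO, WORLD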

(As a side note: everything should be easy_installable. Kamaelia and Twisted are not, though Twisted has ongoing work to this end.)

Kamaelia also fails to offer multi-process operation, as far as I can tell (you could write it yourself without too much pain), and it needs a way to generalize its “wires” to support communication with, e.g., remote components. You could actually combine the marshalling component with the framing component and the TCP client/server components and make this work; but that might have shot straight past “configuration” into “programming.”

Twisted is a much, much, much larger framework than I ever realized, and a lot of it seems pretty good. Nonetheless, the centerpiece of Twisted (if you believe the docs) still seems to be their “reactors” which require you to handle concurrency by defining things like Protocol subclasses that receive event messages (i.e. method calls) like connectionMade and dataReceived. This programming model might feel a little strange. They’ve got this neat looking inlineCallbacks decorator, which looks like it might lend itself to a coroutine kind of style. Then you start to realize that you’re not sure what you can use it for. I actually started writing something like:

from twisted.protocols.basic import LineReceiver

class HelloWorldProtocol(LineReceiver):
    def connectionMade(self):
        self.transport.write("Hi, who are you?\n")
        # Now I'll read their name:
        line = yield self.readLin  # ... hey, uh, there isn't a read method

I’ve seen several IRC logs where people try to figure out similar things. For what I’m doing, inlineCallbacks doesn’t seem like something I’m going to be able to use much.
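
For contrast, here’s the kind of thing inlineCallbacks does seem to be for, as best I can tell: yielding a Deferred you already have and getting its result back when it fires. (double_later and show are names I made up for the sketch.)

from twisted.internet import defer

@defer.inlineCallbacks
def double_later(d):
    # yield suspends this generator until the Deferred fires; the fired
    # value comes back as the result of the yield expression.
    value = yield d
    defer.returnValue(value * 2)

def show(result):
    print "got", result

d = defer.Deferred()
double_later(d).addCallback(show)
d.callback(21)   # fires the Deferred; eventually prints "got 42"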

I’ve considered building something like Kamaelia’s style of wiring up components inside Twisted. Twisted has some kind of support for things like processes and threads. I haven’t determined if these really meet my needs, and I keep reading scary things about them being deprecated.

If you believe Twisted’s finger tutorial, they’ve also drunk the “Zope Component Architecture” Kool-Aid, though thankfully I didn’t see any ZCML (yet…). Look at the final product of that tutorial and notice all the interfaces and “adapters” flying around. I don’t really feel like I gain enough for that extra code.

The question I ignored earlier, the one of performance, is still outstanding in my mind: can this kind of system be done [in Python] efficiently? I’m afraid I’m dreaming of a “component system” that’s going to be so slow as to be impractical.

And, finally: do I really need these features?

December 25, 2007

Linux audio strikes back

Fedora 7’s new Firewire stuff might not be totally together: when I plugged in my DV cam, I couldn’t read the device except (I guess?) as root. (Kino also kept crashing, and I ended up just using dvgrab.)

Apparent side effect of running sudo kino: some shared memory used by dmix became owned by root and mode 0600. Thus when you run something like, say, aplay (with ALSA configured to use dmix by default), you get “unable to create IPC semaphore” (among some other lines).

My fix:

  1. grep ipc_key /etc/alsa/*. In my case, I see something like /etc/alsa/alsa.conf:defaults.pcm.ipc_key 5678293 (that’s 0x56a4d5 in hex; see the quick check below).
  2. ipcs -a and look for the IPC key in hex. I had both 0x56a4d5 and 0x56a4d6. I… hope 0x56a4d6 belonged to ALSA because…
  3. First I made sure that the nattach column said 0 for any dmix-related segments/semaphores, then
  4. I used ipcrm -M 0x56a4d5 (and then ipcrm -M 0x56a4d6) to delete those shared memory segments, and ipcrm -S 0x56a4d5 to delete the semaphores (“Semaphore Arrays” is the heading; maybe simply saying “semaphores” is poor form on my part).

Then audio worked.
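
(In case the jump from step 1 to step 2 isn’t obvious: the ipc_key in alsa.conf is decimal, while ipcs shows keys in hex, so it’s just a base conversion. A quick check in Python:)

>>> hex(5678293)
'0x56a4d5'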

I am more and more looking forward to Fedora 8 and PulseAudio.

December 15, 2007

Audio in Linux is awesome

I’ve got some poorly recorded MP3s of people speaking. I want to try to make them a little easier to hear. In Windows I’d reach for Sound Forge. How about in Linux?

  1. Search Google for “sound forge equivalent for Linux.”
  2. Find several references to “Wave Forge.”
  3. Find “Wave Forge” hasn’t been updated this century. Move on.
  4. Decide to try Audacity because it’s in the back of your head, and Ardour because you found a bunch of links to it somewhere.
  5. yum install audacity ardour. That was easy.
  6. Run Ardour. Tells you it needs JACK. WTF is a JACK? Move on.
  7. Run Audacity. Loads. GUI looks a little silly compared to Sound Forge, but it looks functional enough.
  8. Try to load the MP3 file. Get told this version doesn’t have MP3 support.
  9. Fish around for something to decode an MP3 to a WAV file. Feel bad about considering installing xmms just because you remember how to do this with WinAmp. Rejoice when you find lame --decode.
  10. Load the WAV in Audacity. Looks good.
  11. Hit the play button. Get told there’s an error in sound output.
  12. Check sound preferences. Note that there are no available playback devices.
  13. Read around for a while about JACK. See references to jackd. Eventually realize that this is something you need to run yourself, as your own user.
  14. Run jackd. Get told that it can’t open the hardware device, presumably because other things (Amarok, Flash) are using it.
  15. Find the correct invocation to run jackd which is something like jackd -d alsa -d default (-d twice, WTF).
  16. jackd seems to keep running. Cross fingers, run Ardour. It opens.
  17. Look at the Ardour interface. Decide that (1) it’s not what I want, and (2) dear god that is ugly. Is that Tk? Motif? Holy hell. Run away.
  18. Open Audacity back up for the shit of it. Lo, there is some sort of JACK playback device now. Select it, hit OK.
  19. Click play button in Audacity. Error with sound card.
  20. Go into settings, change record device from OSS to JACK. (But I’m not recording?) Click play button in Audacity. Sound comes out! Rejoice.
  21. Select a section, figure out how to zoom in. Click play. Get an error telling you it can’t play.
  22. Try playing different selections, no selections. Keep getting the same error.
  23. Restart Audacity. Same error.
  24. Restart jackd. Restart Audacity. Same error.
  25. Read about qjackctl being very helpful. yum install qjackctl. That was easy.
  26. Run it. Not sure what I’m looking at. Says JACK is started. Try to turn on logging. Tells me I have to restart something. Whatever.
  27. Restart Audacity. Hit play. Same error.
  28. strace jackd. Hit play in Audacity. No activity.
  29. Stop jackd and tell qjackctl to start it. Get a pretty incomprehensible error message in its “log.”
  30. Realize that it’s bitching because it’s trying to start it with real-time priority, which it presumably doesn’t have permission to do.
  31. Read http://jackaudio.org/faq. “The simplest, and least-secure way to provide real-time privileges is running jackd as root. This has the disadvantage of also requiring all of JACK clients to run as root.” Yeah, no.
  32. Google around a bit, find out about /etc/security/limits.conf. Find some lines in there referring to @jackuser.
  33. Try to usermod -a -G jackuser myuser. Fails, presumably because my user is in LDAP (but the group is in /etc/group).
  34. vigr, add myself to the jackuser group by hand.
  35. Don’t want to restart my X session to get new groups. Figure I need to log in from scratch to get new limits. Fuck it, ssh localhost.
  36. Run qjackctl. Tell it to start jackd. Works. Rejoice.
  37. Run Audacity. Hit play. It works!
  38. Stop. Hit play again. It still works!
  39. Stop. Make a selection. Play the selection. Holy shit it played the selection!

Really not very much work at all. And Audacity only crashed, like, three or four times while I was using it! (Mostly when hitting the play button to play the section I was working on. I can’t remember if jackd exited too.)

I can’t wait for PulseAudio. I’m sure that will make all of this even easier.

Oh, but in the end, I just ended up using sox and normalize in a script to do my MP3s in a batch…

November 17, 2007

Restoring mailboxes to Cyrus and CRLF

I had to restore some Maildir-style mailboxes from an old Courier IMAP server to a newish Cyrus IMAP server. There’s not much to it, and this is well documented elsewhere, but basically you copy the files in (I don’t know if they need to be named like 123. the way Cyrus names them by default), make sure permissions are right (and SELinux contexts, if applicable), and invoke some combination of reconstruct.

I did hit a few gotchas though:

  • Like I said above, don’t forget to restorecon -Rv on the restored files if you’re on a system with SELinux enabled.
  • If you have mailboxes, particularly folders underneath the inbox, you may need to invoke reconstruct with -p default. This registers them in mailboxes.db, I guess? It makes them show up in lm from cyradm, which can’t be bad.
  • Related to the above, I don’t think reconstruct will believe a directory is a mailbox unless there is a cyrus.header file in it. Running touch cyrus.header was enough for me. (I was calling reconstruct -p default -xrf, BTW; I don’t know if, for example, -x makes it rewrite cyrus.header to have valid/meaningful contents. It should be non-empty, I believe.)
  • Finally, Courier’s messages all just used a line feed for line endings. Cyrus demands DOS-style CRLF for line endings. Symptom for this: bringing up the message index (in Mutt, at least) goes much, much slower than it should. I believe this happens because too much information gets put in the Cyrus header/cache files. (Use the mbexamine program and you’ll see lots of weird information in the “headers” I think.) Fixing this for me was as simple as running unix2dos over the files (and then fixing permissions; I should have done sudo -u cyrus unix2dos [0-9]*. instead of just running unix2dos as root.)