
Saturday, 5 November 2011

Working directory at PHP shutdown

I'm working on an online bookstore website, and I decided I needed to use session_set_save_handler in order to consolidate all web state in the site's MySQL database.

One of the first things I do in nearly any software development project is to set up a convenient debug logging facility. PHP has error_log, but a) it seems like it's full of magic behaviour, b) it needs a string, so you have to flatten any structured data yourself, first, and c) I didn't know about it when I wrote debug(). What better thing to log than one's ad-hoc session handlers?

It's a good habit to check Apache's error log once in a while. That's where debug() originally wrote to, but because I don't necessarily have access to Apache's logs, I taught debug() to log to a file near the DocumentRoot. But today I checked Apache's error log, and lo and behold, it was full of errors:

[Fri Nov 04 22:45:47 2011] [error] [client ::1] PHP Warning:  fopen(../logs/mph.log): failed to open stream: No such file or directory in /home/berndj/public_html/mph-adhoc/lib/debug.php on line 7, referer: http://localhost/~berndj/mph-adhoc/web/index.php

After much head-scratching and source-diving, and poring over strace's log of what Apache does when I hit the page, I figured it out: PHP resets the working directory back to where it was before it executed the script, and only then runs destructors and other shutdown functions - one of which, I assume, calls session_write_close [1]. That left some of my session handlers running in a directory where I didn't expect them to. (The root directory, to be precise.) And that was why the attempts to fopen a relative path were failing: in the root directory, ".." just leads back to "/", and there is certainly no logs directory there [2].
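
The trap isn't unique to PHP, of course. Here's a rough analogy in C - hypothetical paths, and not the PHP demo from footnote [1] - showing the same shape of bug: a cleanup handler registered with atexit() only runs after main() returns, and if the working directory has changed by then, any relative path it opens resolves against the new one.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void log_at_exit(void)
{
    /* By the time this runs, the working directory is "/", so the relative
     * path resolves somewhere unexpected - ENOENT, just like the PHP
     * warning above.  (Hypothetical path.) */
    FILE *fp = fopen("../logs/shutdown.log", "a");

    if (fp == NULL)
        perror("fopen in atexit handler");
    else
        fclose(fp);
}

int main(void)
{
    atexit(log_at_exit);
    /* ... the "script" does its work in its own directory ... */
    if (chdir("/") != 0)    /* stand-in for PHP resetting the cwd */
        perror("chdir");
    return 0;
}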

I guess I have to choose a place to call session_write_close - somewhere that guarantees it always gets called, yet before PHP starts cleaning up (is this possible?). Perhaps my compromise will be to add the call to my boilerplate file - the one I paste into every new file when I start writing it. There's an extra benefit to doing this: it lets me get rid of one XXX tricky-code marker:

# XXX Don't call mysql_close() as session handling depends on having a DB connection.

[1] To confirm my understanding of what was happening, I wrote a demonstration of what happens to the working directory in PHP's session handling. Install this simple file into a PHP-enabled web directory, and watch its output change as you toggle the call to session_write_close.

[2] For a flight of counterfactual fancy, imagine if my log file were "../log/messages" and the website were set up to run from /var/www! The error description would then not be "No such file or directory" (ENOENT) but "Permission denied" (EACCES). I'm not sure whether that would have helped or hindered my search for an explanation. It may well have prompted me to reach for strace sooner.

Wednesday, 26 October 2011

How to waste three hours

Earlier today (aka "yesterday") I wasted a few hours trying to figure out why Smarty hated me. In a project I'm working on, I need a navigation bar that appears on every page, so it's natural to sequester the navigation bar HTML into a sub-template. In the page's toplevel template, I had:

{include file="mainmenu.tpl"}

but when I refreshed my page, the navigation area contained, literally, {include file="mainmenu.tpl"}, and not the menu HTML as I expected.

It was just me though: Smarty was singling me out! Its own demo template also includes a sub-template, but that worked just fine.

Round and round I went, trying random settings. Perhaps Smarty wasn't finding its "sysplugins"? Or maybe it had cached Something Bad from an earlier attempt that was wrong for different reasons. Refresh, refresh, fiddle, refresh, curse!

Eventually I got so frustrated I started questioning the fundamentals [1]. Do variable expansions at least work? {$foo} — No! Not working! Just another grinning curly brace pair pseudo-emoticon taunting me in the browser tab!

At least the light went on. Then I discovered what I had forgotten:

$smarty->left_delimiter = '«';
$smarty->right_delimiter = '»';

Guillemet! (I need different delimiters to avoid having to escape CSS declarations sprinkled all over.) Having fixed my Smarty syntax to use those sneaky U+00AB and U+00BB instead of those ubiquitous curly braces, order is restored. Everything works again, and I shall hopefully never again forget that I am using nonstandard delimiters in this project.

[1] Why do I seem always to question my assumptions only when I am at the end of my tether? Recognizing this pattern is as frustrating as knowing that whatever I look for will only be in the last pocket I search.

Tuesday, 25 October 2011

Another cost of staff turnover

At an undisclosed business, there has been a history of complete staff replacements over the years. In fact when I started working there, the entire development team (two people) had recently quit and left, with a night watchman soaking up as much as he could during their notice period and then helping me to wrap my head around things.

I coped quite well overall; somehow I was able to assimilate most of the important areas of the project quite quickly. It helps that I actively enjoy reading code - if only I knew how to get paid for rendering a code review service.

But every now and then, and invariably in the code that deals with Berkeley DB files, there'd be some lurking anti-aesthetic singularity, an infinite source of code entropy. Just reading that code makes me want to cry - it is beyond the point of equilibrium between fixing and introducing bugs. And because I rarely need to fight a fire there (which seems a miracle), I have had only sporadic and patchy understanding of it - I never truly grokked it all. Perhaps if I had spent twenty years working there instead of only four, I would have accumulated enough brain circuitry to achieve that level of oneness with the code.

That is the problem: nobody ever did - nobody from any of the previous development teams held tenure for even as short a period as I did. (Actually the night watchman is still there, in the morning, the following week, but he deals nearly exclusively with the codebase he and I accidentally rewrote - a codebase I like to believe is not yet at entropic equilibrium.)

This is a problem that costs you, whether in profit as an owner, or in boardroom clout as a manager, or in customer perceptions as a salesperson. It costs you because when there is a problem in such hairy code we aren't familiar with [1], we can't make any reliable estimates, let alone promises, about how long it will take to fix, or even if it is possible at all to fix. In statistical terms: this unfamiliarity represents a high variance in the cost of doing our business of software development.

So quit thinking of people as perfectly interchangeable units of production, as commodified development resources [2]. We might all be capable of learning new technologies, of gaining an effective familiarity with most of the work we do, and of replacing colleagues. It isn't a cheap replacement though - not like you can schedule one night of downtime in a factory and replace a few bearings. And as far as corporate lackeys, worker drones, and cogs in a machine go, we're a pretty expensive means of production. So keep your machine oiled, and try to keep the sand out. Do the fuzzy math - figure out what level of oiling and maintenance is cost-effective.

[1] You may argue that it is our (developers') job to become familiar with the code. But at whose expense do we study this rarely-visited code? Do we stop developing the new features you clamour for in order to study code we might not need to change for another few months? Shall we ignore other, directly visible bugs? There is a significant opportunity cost to having developers do anything, if we aren't sitting around waiting for work. (Yes, sometimes we goof off, and there lies a valid criticism - but not the same one. And normally we try to make up the debt somehow. We might not write it down on paper, but we - I anyway - do mentally "keep score".)

[2] The company where I worked before the undisclosed business that is the setting of this post had a penchant for referring to developers as "resources" - both in speech and in writing. I understand that "resource" is a fairly standard part of the project management jargon, and is quite appropriate in that abstract world where one is pushing long candlesticks around on a Gantt chart. I find it quite offensive, though, to use such a dehumanizing term in reference to particular people and teams.

Sunday, 18 September 2011

OMF support for binutils

The setting

Some years back I was working on an exciting project: at Prism Payment Technologies we were building self-service terminals destined for fuel stations in Kuala Lumpur.  These terminals were miniaturized 90s-era PC-compatible computers with a PC/104 bus: to add peripherals one simply stacked the boards onto the previous board's million-pin header.  Being responsible for pretty much all the firmware running on these critters, I felt like I had been transported back to 1992, back when I was trying to master all the PC's standard(ish) peripherals.  I even got to tie up the one loose end I never got to in the 90s: programming the VGA registers!

The code was all in C, and we used TopSpeed C to compile the project.  It was a mixture of little bits of assembly for the ISRs, a few third-party libraries (a TCP/IP stack) and a real-time operating system (uCOS), and our own IFSF protocol stack and application.  When I inherited the project, 640K was just barely enough for us - it was a constant battle for bytes in order to make everything fit into memory.  Some days I would need to add a feature, but be unable to run the firmware as the last few of the 640K bytes had been consumed.  Then I would spend the next day or so, painfully inspecting each of the usual memory-hog suspects, searching for arrays to shrink.

Enter GNU binutils

I knew objdump could, in principle, tell me exactly which object modules were the ones most likely to harbour hogs.  Yet binutils had no OMF support - OMF being that FOO.OBJ object file format familiar to DOS programmers.  (TopSpeed used the same format, and luckily it was same enough.  More in [1].)

Rather than spend frustrating hours manually searching for unreasonably large variables, I decided to teach libbfd how to read OMF object files.  Pretty soon I was able to get the answers I needed - there had indeed been several large buffers that no code was using.

Over time I extended the OMF port to support most of the common features of these object files.  Wrapping my head around relocations was the hardest part - the BFD concept of relocations in particular, because it is (necessarily) so complex due to the many ports' quirky features it has to address.

Dinosaur mating season

But before I could get the GNU paperwork through with the comparatively Open Source-friendly [2] Prism management, mating season arrived and the company had new owners.  A much bigger company, whose HR handbook seemed to consist of variations on the theme "Lift the drawbridges - keep the barbarians out".  It became a sufficiently unpleasant environment for many of us Prismers (say that fast) that about one third of us quit - including me.  I simply didn't have the energy to convince a likely-to-be impersuasible and anecdotally abusive Kaiser to sign the GNU paperwork: redoing it all from scratch seemed the better deal.

Second time lucky

In the two weeks between jobs, I hacked and hacked and hacked deep into the night, sweating to implement enough OMF support to be useful, while my memory of the file format was still fresh.  It was a real reimplementation: I didn't simply copy&paste my earlier code - I knew that it was now off-limits and would taint anything that I derived from it.  I'm pretty sure the result was better in some respects than what I had had at Prism - the code certainly smelled cleaner.

But eventually I got busy enough at my day job that I didn't have enough mental bandwidth to devote to finishing my BFD port, so my efforts faded and then stopped altogether for a few years.  At that point objdump could answer most of my questions, save for external symbols and relocations.

At some point I asked my boss Rob Love to sign the employer disclaimer of rights (a necessary evil part of the GNU paper trail); he did so with enthusiasm (thanks!) and I no longer had an excuse to continue procrastinating.  I spent a while rebasing my between-jobs patches to the current binutils code, and reacquainting myself with OMF.

Desert Wandering

Since then I've left Rocketseed (email marketing doesn't make me a spammer, but it's fun to tell dance-class girls that anti-fact) and am now retired / freelancing / unemployed [3], so I've had time to plug a few holes: my port now understands external symbols, relocations, and a few other minor features.

The result

Every project has a foo.ext.  Here's mine:


segment text


extern bar
extern baz
global foo, reloc_kitty, reloc_foo, reloc_bar1, reloc_bar2


foo:
call bar
call baz
call baz + 10
call baz
call foo
call seg bar:bar
lea ax, [foo wrt seg bar]


reloc_kitty:
call text:kitty
reloc_foo:
dw foo
reloc_bar1:
dw bar + 10 wrt seg baz
reloc_bar2:
dw seg bar


segment trampoline
kitty: ret
dw trampoline wrt text

NASM assembles this not-useful-at-all code, and objdump -D -r -p foo.obj dumps it as:


foo.obj:     file format i386omf


Module name: foo.asm
Translator: The Netwide Assembler 2.10rc4
LNAMES:
  1 
  2 text
  3 trampoline
SEGDEF:
  text (2)
  trampoline (3)
GRPDEF:


BFD: Found 7 symbols




Disassembly of section text:


00000000 <foo>:
   0: e8 00 00             call   3
1: OFFPC16 bar+0xfffffffe
   3: e8 00 00             call   6
4: OFFPC16 baz+0xfffffffe
   6: e8 0a 00             call   13
7: OFFPC16 baz+0xfffffffe
   9: e8 00 00             call   c
a: OFFPC16 baz+0xfffffffe
   c: e8 f1 ff             call   0
   f: 9a 00 00 00 00       lcall  $0x0,$0x0
10: OFF16 bar
12: SEG bar
  14: 8d 06 00 00           lea    0x0,%ax
16: OFF16 text
16: WRTSEG bar


00000018 <reloc_kitty>:
  18: 9a 00 00 00 00       lcall  $0x0,$0x0
19: OFF16 trampoline
1b: SEG text


0000001d <reloc_foo>:
...
1d: OFF16 text


0000001f <reloc_bar1>:
  1f: 0a 00                 or     (%bx,%si),%al
1f: OFF16 bar
1f: WRTSEG baz


00000021 <reloc_bar2>:
...
21: SEG bar


Disassembly of section trampoline:


00000000 <kitty>:
   0: c3                   ret    
...
1: SEG trampoline
1: WRTSEG text


I'm not quite satisfied with the relocation type names I chose, especially that WRTSEG business.  It's necessary though, because in OMF, a relocation can ask for the offset of a symbol from the base of any segment, not only the segment in which its definition resides.

Show me the code!

Your wish is my command.  Behold, OMF support in binutils!  Also, a perl script to dump the OMF records: omfdump.
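
If you just want a feel for the format without building binutils, a record walker fits in a page of C.  This is only a sketch of the framing (the record names are from the TIS OMF spec), nowhere near what the BFD port or omfdump actually do:

#include <stdio.h>
#include <stdlib.h>

/* OMF framing: a 1-byte record type, a 16-bit little-endian record length,
 * then that many bytes of payload, the last of which is a checksum. */
static const char *record_name(int type)
{
    switch (type & ~1) {        /* odd type numbers are the 32-bit variants */
    case 0x80: return "THEADR";
    case 0x88: return "COMENT";
    case 0x8A: return "MODEND";
    case 0x8C: return "EXTDEF";
    case 0x90: return "PUBDEF";
    case 0x96: return "LNAMES";
    case 0x98: return "SEGDEF";
    case 0x9A: return "GRPDEF";
    case 0x9C: return "FIXUPP";
    case 0xA0: return "LEDATA";
    case 0xA2: return "LIDATA";
    default:   return "?";
    }
}

int main(int argc, char **argv)
{
    FILE *fp;
    int type;

    if (argc < 2) {
        fprintf(stderr, "usage: %s foo.obj\n", argv[0]);
        return EXIT_FAILURE;
    }
    if ((fp = fopen(argv[1], "rb")) == NULL) {
        perror(argv[1]);
        return EXIT_FAILURE;
    }
    while ((type = fgetc(fp)) != EOF) {
        int lo = fgetc(fp), hi = fgetc(fp);
        long length;

        if (lo == EOF || hi == EOF)
            break;                      /* truncated record header */
        length = lo | (hi << 8);
        printf("%-6s (0x%02X), %ld payload bytes\n",
               record_name(type), type, length - 1);
        fseek(fp, length, SEEK_CUR);    /* skip payload and checksum */
    }
    fclose(fp);
    return EXIT_SUCCESS;
}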

Notes

[1] TopSpeed C was one of the 80s-era compilers; one of the better ones IMHO, but not very well known.  The way I heard it (William Hayes seemed to know more of the back story), TopSpeed C was going to be Borland's C compiler, but when it took too long to ship, Borland acquired Wizard Systems and used their C compiler instead.  (The Wikipedia article on Borland doesn't specifically mention this shipping delay - this part might be apocryphal.)  The C compiler folk at Borland eventually spun off into TopSpeed.  TopSpeed C was the better compiler though; it understood volatile better than Microsoft C, Microsoft QuickC (I cut my C teeth on this one), and Turbo C.  Some of these got volatile partly right, others seemed simply to ignore the keyword.  TopSpeed C on the other hand knew to load or store a volatile variable from or to memory each time the C code referenced it, which was close enough to completely right for my purposes.  (I needed the uCOS synchronization primitives to work right, and the async serial port handler not to lose characters.)  I was rather amused though when I found one compiler bug in TopSpeed C: it discarded comments before it processed backslash-newline line continuations!  Granted, this was before the days when // comments had been standardized.  As for the object file format that the TopSpeed compilers generated, it was OMF, but I have a sneaky suspicion they did at least one thing that is a little... unconventional: LEDATA records sometimes overflow the corresponding SEGDEF!  It's probably not a mortal sin, as a sufficiently smart decoder could realize that the segment was in fact larger than just the first SEGDEF with that segment's name: subsequent SEGDEFs seem to "extend" the first.  I don't have solid evidence to back all of this up, but it showed up while I was testing my port with an old object file I had lying around.

[2] I usually prefer to use the term "Free Software", but in this case I wanted the superset-meaning of the phrase "Open Source".  The Prism company culture was by no means welcoming of only Free Software!

[3] Honestly, I'm a bit bored with writing only software.  I really miss the hardware / software mix I was part of at Prism (that world is now forever lost; I have no illusions about wanting to go back there).  I'm currently using the occasional software work that comes in, combined with my intense frugality, as a runway to find fun again.  I'm hoping for something with a little bit of bricks and mortar.  In fact, I should be learning to weld, not blogging about code.

Wednesday, 31 August 2011

Reverse engineering SARS' queueing system

On Monday I had to visit SARS to submit an IRP6.  I'm what's called a "provisional taxpayer", which means I get to deal with the taxman about twice as often as people who aren't.

Previously their queueing system was just a long line of people waiting their turn.  Then, some years ago they started offering chairs - a little more comfortable, but the queueing system was basically the same: one's seat determined one's position in the queue, so it was like a big game of Musical Chairs.  And finally, about a year ago, SARS started issuing numbers at the reception desk: one could now sit anywhere and simply wait right there until one's number came up.  The numbers appear on a screen, but also, a Stephanie Hawking type computer voice reads the numbers aloud.

I had been trying to read a few pages of The Great Disruption, but Stephanie's staccato voice proved too distracting - like a numbers station's siren call.  So I decided to make lemonade.  For a bit more than an hour I jotted down the time and each new number - with the aim of writing exactly this post.

Now that I see the graph, it's a bit disappointing.  I was hoping to see points along more lines of different slopes and intercepts than the two segments visible here - which I know both to be from the "INCOME TAX RETURNS" stream.  (By the way, why are these things always in all-caps?)  There seemed to be multiple virtual queues running simultaneously, each allocating numbers from distinct ranges.  Perhaps I just didn't collect enough data - perhaps somebody needs to spend a whole day there.  Maybe next time.

Wednesday, 24 August 2011

Time lapse movie from stills recipe

Using external tools to sort stills, and stdin as "listfile":

ls -rt webcam-20110819-1* |mencoder -o /tmp/couch.avi -mf fps=10:type=jpg mf://@/dev/stdin -ovc copy

I need this to check if the dog lies on the couch while I'm out shopping!

Monday, 22 August 2011

How many people get hit by celebratory gunshots into the air?


Nissemus tweets:
Wonder how many people have been hit by "celebratory" rounds fired into the air falling back to earth? #Libya
I've often wondered the same.  I think the number must be quite low.  Factors to consider:

  • If fired at a steep angle, the bullet is in the air so long that it has time to bleed off excess velocity, and to reach terminal velocity on the way down.  While still dangerous, possibly lethal (depending on what part of the body it hits), it is not as dangerous as a bullet fired straight at a living target.
  • It will be very difficult to fire a round such that the projectile lands anywhere near the point of firing. If a large number of people celebrating a victory are gathered together, then a celebratory round fired from that crowd will be unlikely to hit anyone in that crowd.
  • Outside of such a gathered crowd, the density of people outdoors will be comparatively low.  Manila has the highest population density of any city proper, at 43079/km²; if every last resident of Manila were outdoors, they would still only cover about 1/50th of the target surface area available to an unaimed bullet.
  • Tripoli, specifically, is not as densely populated as Manila; Wikipedia notes a density of 4205/km².
  • Let's allow that most Tripoli residents are outdoors during the celebrations.
  • An upright human body might present about 0.5m² cross section for a falling bullet to hit.  Sometimes more, sometimes less.  Let's err on the side of caution.
Then, of every km² of land area, humans in Tripoli occupy at most about 2000m².  That is, 0.2%.  My guesstimate of an answer then to Nissemus' question would be that you might see one falling-bullet injury (not all of which would result in death) for every 500 rounds fired.
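
For concreteness, here's that arithmetic as a scrap of C (the density and cross-section figures are the assumptions from the list above):

#include <stdio.h>

int main(void)
{
    /* Assumptions from the list above: Tripoli's density and a generous
     * 0.5 square-metre cross-section per person. */
    double people_per_km2 = 4205.0;
    double cross_section_m2 = 0.5;

    double covered_m2_per_km2 = people_per_km2 * cross_section_m2;
    double fraction = covered_m2_per_km2 / 1e6;   /* 1 km^2 = 1e6 m^2 */

    printf("area covered by people: %.0f m^2 per km^2 (%.2f%%)\n",
           covered_m2_per_km2, 100.0 * fraction);
    printf("rounds per expected hit: %.0f\n", 1.0 / fraction);
    return 0;
}

It lands at about 0.21% coverage, or a little under 500 unaimed rounds per expected hit - the same ballpark as the guesstimate above.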

Monday, 4 July 2011

More Freeciv AI thoughts


I'm busy killing time with a solo game, and I have what superficially looks like a very good deal:

The economics report suggests that Adam Smith's Trading Company would save me 43 gold per turn, in perpetuity. It's in perpetuity regardless of when the savings start, so since the Company would be ready in 34 turns anyway due to the city's production capacity, buying it would save me 33 turns.

43 gold over 33 turns is 1419 gold, and gold doesn't depreciate, so this is definitely a potential arbitrage opportunity: spend 680 now to save 1419. My kitty is sufficiently full (4026 gold), and it's a solo game, so I'm not too worried about opportunity costs - losing a war for want of 680 nails would be such a cost.

To figure out what interest rate discounts those 43 * 33 savings to 680 gold now, I head over to my online financial calculator, and do a manual pseudo-binary search on the interest rate. 61% per "year" just about nails it, which translates to 5.1% per turn.
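
For the curious, here's that pseudo-binary search as a few lines of C - a sketch using the figures above (43 gold per turn, 33 turns, 680 gold now), treating each turn as one compounding period:

#include <stdio.h>

/* Present value of an annuity: payment per turn, for n turns, at rate r. */
static double annuity_pv(double payment, int turns, double r)
{
    double pv = 0.0, discount = 1.0;

    for (int t = 0; t < turns; t++) {
        discount /= 1.0 + r;
        pv += payment * discount;
    }
    return pv;
}

int main(void)
{
    double lo = 0.0, hi = 1.0;      /* bracket the per-turn rate */

    for (int i = 0; i < 60; i++) {
        double mid = (lo + hi) / 2.0;

        if (annuity_pv(43.0, 33, mid) > 680.0)
            lo = mid;               /* PV still too high: rate must be higher */
        else
            hi = mid;
    }
    printf("implied rate: %.3f%% per turn\n", 100.0 * lo);
    return 0;
}

Sixty bisection steps are overkill, but the answer settles at about 5.1% per turn, in line with the calculator.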

Is my civilization growing, in some sense, at 5.1% per turn? The gold kitty (and the science bulbs kitty too, for that matter) doesn't attract interest the way money in the bank might. A Freeciv AI agent might not actually have an easy way to determine an interest rate - it would have to weigh many individual decisions at every turn in order to have an idea. My agent-based Freeciv AI does have a notion of "prevailing interest rate", but has no mechanism to determine what prevails. My current thinking is to list each possible decision according to the discount rate required for it to be worthwhile, then do all the most worthwhile things until there's no money, no time, or no movement left. The last such action would probably be the best single estimate of the civilization-wide risk-free rate of return, but by then that number itself would no longer be interesting.

There are complications, and I'm not sure how to resolve them. Firstly, actions are not independent. My AI breaks tasks down into subtasks, and these subtasks can sometimes serve multiple supertasks. (This is code in progress - I have no code collateral yet.) For example, if I need to build a road to a port city, and also to colonize another continent, building one settler as a subtask serves both supertasks: build the road to port, then get on a boat. Each supertask might individually be very low down on the list of worthwhile things to do, but sharing the subtask might push them to near the top. Another concern is quite straightforwardly related to SAT: each task claims some subset of resources, making them unavailable for others.

I'm unsure of how to address these complications; for now my best guess forward is perhaps Monte Carlo style AI: randomly choose some set of tasks to attempt, sort according to risk-free rate of return, and repeat. If any task consistently shows up among the winners, it is likely to be a good idea, so commit to that. Perhaps repeat the procedure on the remaining tasks, or just be lazy and pick a few more runners-up.
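
To make the Monte Carlo idea concrete, here's a sketch in C. The task list and numbers are mostly invented (only the Adam Smith figures come from above), and it uses a crude payoff-to-cost ratio where the real thing would want a per-turn rate of return - but it shows the shape of the loop: build random affordable bundles, keep each round's winner, and see which tasks keep showing up.

#include <stdio.h>
#include <stdlib.h>

struct task { const char *name; double cost, payoff; };

static struct task tasks[] = {
    { "buy Adam Smith's Trading Co.", 680.0, 1419.0 },
    { "rush a settler",                90.0,  150.0 },  /* invented */
    { "rush city walls",              120.0,  140.0 },  /* invented */
    { "buy a trireme",                 80.0,   95.0 },  /* invented */
};
enum { NTASKS = sizeof tasks / sizeof tasks[0] };

int main(void)
{
    const double budget = 1000.0;
    int wins[NTASKS] = { 0 };

    srand(1);
    for (int round = 0; round < 1000; round++) {
        unsigned best_mask = 0;
        double best_ratio = 0.0;

        for (int trial = 0; trial < 100; trial++) {
            unsigned mask = 0;
            double cost = 0.0, payoff = 0.0;

            /* Random feasible bundle: flip a coin for each affordable task. */
            for (int i = 0; i < NTASKS; i++) {
                if (rand() % 2 && cost + tasks[i].cost <= budget) {
                    mask |= 1u << i;
                    cost += tasks[i].cost;
                    payoff += tasks[i].payoff;
                }
            }
            if (cost > 0.0 && payoff / cost > best_ratio) {
                best_ratio = payoff / cost;
                best_mask = mask;
            }
        }
        for (int i = 0; i < NTASKS; i++)
            if (best_mask & (1u << i))
                wins[i]++;
    }
    for (int i = 0; i < NTASKS; i++)
        printf("%-30s won %d/1000 rounds\n", tasks[i].name, wins[i]);
    return 0;
}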

Monday, 27 June 2011

First hits on online calculator

My online calculators have gotten a handful of hits, and not all by my friends. Things are going slow, and it'll take me a few lifetimes before I get the cheque from Google, but it's nice to get that validation in the Apache log - that somebody out there saw my page in a search result and thought maybe that was what they were looking for.

A whole THREE hits, all of them on the electrical calculators:
  • "cable current capacity calculator" - somewhere in Dikgatlong Municipality
  • "volt drop calculator cable size" - Cybersmart netblock
  • "derating factor for temperature" - University of Venda
Half surprising that all three are from South Africa - but maybe that's just one of the ways Google orders their search results?

Sunday, 12 June 2011

Can we solve Ataxx?

Information in a 7x7 board: log(3^49)/log(2) = 77.66 bits, too many to fit into a 64-bit register.

Storing the value of each Ataxx position would take at least 3^49 ≈ 2^77.66 table entries of one trit each (win, lose, or draw) - an insane amount of storage. No, naive brute force is crazy. But we could work "outside-in" - evaluating the fullest boards and storing their values in a table, then truncating a forward search whenever we hit one of these.

How many of these full boards are there? If n = number of gaps, m = number of black pieces, then there are 49!/(n!*(49-n)!) ways to distribute the gaps, and (49-n)!/(m!*(49-n-m)!) ways to distribute the black pieces around the gaps, for a total of 49!/(n!*m!*(49-n-m)!) positions with a given n gaps and m black pieces.
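
As a sanity check on the table further down, here's that formula as a quick C sketch; long double is accurate enough for the smaller rows, though exact bignum arithmetic would be cleaner:

#include <stdio.h>

static long double lfact(int n)
{
    long double f = 1.0L;

    for (int i = 2; i <= n; i++)
        f *= i;
    return f;
}

/* Number of 7x7 boards with n gaps and m black pieces (rest white):
 * 49! / (n! * m! * (49-n-m)!) */
static long double positions(int gaps, int black)
{
    return lfact(49) / (lfact(gaps) * lfact(black) * lfact(49 - gaps - black));
}

int main(void)
{
    /* Reproduces the first two rows of the table below:
     * 28277527346376 and 18851684897584. */
    printf("n=0 m=20: %.0Lf positions\n", positions(0, 20));
    printf("n=0 m=30: %.0Lf positions\n", positions(0, 30));
    return 0;
}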

Interestingly, even a naive list of position values compresses really, really well:

berndj@capybara:~/anthraxx$ ./bruteforce |head -100000000 |bzip2 -c9 |wc
858 2691 107495

That's 100000000 trits (win, lose, or draw) - about 19.8MB if packed at log2(3) ≈ 1.585 bits per trit - compressing to 107495 bytes: a compression ratio of 184. All information should compress so well! I expect that with clever enumeration sequences, one can improve this compression ratio even more. Ternary Gray code anyone?

Some examples:

n  m   #positions           raw_storage  compressed_storage
0  20  28277527346376       7TB          38GB
0  30  18851684897584       5TB          26GB
1  20  820048293044904      205TB        1114GB
1  30  358182013054096      90TB         487GB
5  20  3358097760018881880  840PB        4.6PB
5  30  219207391989106752   55PB         298TB

I've made a graph of compressed storage vs m, for n = 3. I didn't bother with m > 23 as the numbers are symmetrical about m = 23. It's clear that motivated wealthy individuals are able to store the values of all positions with only a few gaps, say, no more than 3.

Now how long might it take to populate such a vast table?

berndj@capybara:~/anthraxx$ time ./bruteforce |head -100000000 >/dev/null

real 0m34.547s
user 0m34.930s
sys 0m0.350s

Bear in mind that this is just a naive implementation of what is likely to be a very bitbashing-friendly computation (Ataxx was designed to be easier for computers to compute than for humans!). Furthermore, populating this table is embarrassingly parallel - I haven't even used all four of my poor little laptop's CPU cores. By my estimation I could populate the 3 gaps, 23 black table in no more than 66 years. Probably a lot quicker if I spend just another hour teaching bruteforce.c a better representation of board positions, and using a popcount instruction that may or may not exist on my CPU. I'm quite optimistic about even a modest effort being able to muster the CPU power to populate this precious lookup table. There's a caveat here though: this relatively cheap seed table can hold definitive values only for stalemate positions. Completely full boards are all finished, but only some boards with gaps are stalemated. I should probably adjust the storage requirements about 30% upwards to allow the table entries to encode the fact that the value is unknown - to be filled in later during a forwards search.
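
As for the bit-bashing: the representation I have in mind - a sketch only, not what bruteforce.c currently does - is a pair of 64-bit masks, one per colour, so that counting material is a single popcount per side:

#include <stdint.h>
#include <stdio.h>

struct board {
    uint64_t white;     /* bit i set = white piece on square i (0..48) */
    uint64_t black;
};

static int material(const struct board *b)
{
    /* GCC/Clang builtin: compiles to POPCNT where the CPU has it, and to a
     * short bit-twiddling sequence where it doesn't. */
    return __builtin_popcountll(b->white) - __builtin_popcountll(b->black);
}

int main(void)
{
    struct board b = { .white = 0x3ull, .black = 0x1ull << 48 };

    printf("material balance: %+d\n", material(&b));
    return 0;
}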

Tuesday, 1 March 2011

Making some online calculators

I wanna be a Google millionaire. I fiddled around with some JavaScript to implement a few calculators: one for electrical installations and one for electronics. It's pretty minimal for now, but as I get time and inspiration for more, I'll be adding them. One I definitely need to add soon is an income tax calculator. At least more people want to know their potential tax liability than want to know the derating factor for a PVC cable at 55°C ambient!

Wednesday, 16 February 2011

I never heard of Agnitas EMM until today

One of our servers has been blowing up occasionally. After a few failed attempts, I managed to use gdb to write out the file our process was trying to parse:


(gdb) call open("/tmp/x", 0101, 0777);
$17 = 3
(gdb) call write(3, body_data._M_dataplus._M_p, strlen(body_data._M_dataplus._M_p))
[wait... then hit ^C]
$18 = 1782948729

So clearly there's a missing NUL termination. Hardly surprising for this kind of bug. I just used dd to reduce the file to what seemed like the relevant data, excluding random junk from the process' memory.

And... bingo!

X-Mailer: Agnitas EMM 7.0
...
X-Barracuda-BRTS-Evidence: 5367d33c72fcbdfe74e38a30b0711cd7-9667-unk
X-Barracuda-BRTS-Evidence: c57ea21d3b9fd34cfd1e59d35e55f1c9-4515-unk
X-OriginalArrivalTime: 16 Feb 2011 13:34:07.0165 (UTC) FILETIME=[2F631ED0:01CBCDDE]
X-Recipient-Count: 1
X-Incoming-Message-Id: 1PphWB-0003Zl-Bu
X-Sender-Host-Name: mail.example.com
X-Sender-Host-Address: 192.168.17.42
X-Sender-Auth:

This is a multi-part message in MIME format.

---==AGNITASOUTER164240059B290156CA==
Content-Type: multipart/alternative;

boundary="-==AGNITASINNERB164240059B290156CA=="

The last five of those X- headers are ones we add. But look at that blank line between the MIME part's Content-Type: header and the boundary=... parameter. Just look at it! That blank line caused our process to not know what message part boundary to look for, so it just went into infinite loop apoplexy, gobbling up memory until the OOM killer zapped it.
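
To see why that blank line is fatal, here's a toy header scanner in C - nothing to do with our real parser, just an illustration that header parsing stops at the first empty line, so the boundary parameter never becomes part of the Content-Type header at all:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The three offending lines, as a parser sees them. */
    const char *lines[] = {
        "Content-Type: multipart/alternative;",
        "",                                     /* the stray blank line */
        " boundary=\"-==AGNITASINNERB164240059B290156CA==\"",
        NULL,
    };
    char content_type[256] = "";

    for (int i = 0; lines[i] != NULL; i++) {
        if (lines[i][0] == '\0')
            break;              /* blank line: end of this header block */
        if (lines[i][0] == ' ' || lines[i][0] == '\t')
            /* folded continuation of the previous header line */
            strncat(content_type, lines[i],
                    sizeof content_type - strlen(content_type) - 1);
        else if (strncmp(lines[i], "Content-Type:", 13) == 0)
            strncpy(content_type, lines[i], sizeof content_type - 1);
    }
    /* Prints the header with no boundary parameter in sight. */
    printf("parsed: \"%s\"\n", content_type);
    return 0;
}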

Oh wow, they offer some presumably older version of EMM for download from sourceforge. Let's take a quick dive into their source code:


else if (n == 1)        // online HTML
    buf.append ("HContent-Type: multipart/alternative;" + data.eol +
                "\tboundary=\"" + xmlStr (outerBoundary) + "\"" + data.eol);
...
if (n == 2)
    buf.append ("Content-Type: multipart/alternative;" + data.eol +
                " boundary=\"" + xmlStr (innerBoundary) + "\"" + data.eol +
                data.eol +
                "--" + xmlStr (innerBoundary) + data.eol);
...
else if (n == 1)
    buf.append ("Content-Type: multipart/alternative;" + data.eol +
                "\tboundary=\"" + xmlStr (outerBoundary) + "\"" + data.eol +
                data.eol);
...
if (n == 2)
    buf.append ("Content-Type: multipart/alternative;" + data.eol +
                " boundary=\"" + xmlStr (innerBoundary) + "\"" + data.eol +
                data.eol +
                "--" + xmlStr (innerBoundary) + data.eol);

Nope, doesn't look like EMM is likely to be adding the extra newline. So I don't know who is. Maybe Barracuda Networks are doing something nasty? I can't tell, because there's nothing relevant I can (easily) find on their website.

Tuesday, 1 February 2011

Grokking C declarations.

What does it mean when some bit of code declares

int (*foo(char))(double); // ?

Forget all those C-to-English recipes - if you're anything like me, shoehorning your thinking into natural language just confuses things.

Instead, chop off the atomic type specifier (the "int"), and think what the remaining expression would mean:

x = (*foo('x'))(3.1416);

Immediately it's clear that if you call foo with a char argument, you get a pointer, that you can dereference (not really necessary to do explicitly for function pointers), to get a function that you can call with a double argument, that returns an int.

If you really still need to know what foo "is": it's a function taking char, returning a pointer to a function taking double, returning int. Rather wordy, and now you still have to translate the English version into some more fundamental / more abstract prelinguistic mental structure.
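
Here's a compilable version with hypothetical helper names, in case you want to see the declaration and its mirror-image expression side by side:

#include <stdio.h>

static int round_to_int(double d)
{
    return (int)(d + 0.5);
}

/* foo: takes a char, returns a pointer to a function taking double,
 * returning int - exactly the declaration from the post. */
static int (*foo(char which))(double)
{
    (void)which;                        /* ignored in this toy example */
    return round_to_int;
}

int main(void)
{
    int x = (*foo('x'))(3.1416);        /* the expression from the post */

    printf("%d\n", x);                  /* prints 3 */
    return 0;
}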

Thanks to Lee for pointing this trick out to me a few years ago. I'm sure it's all over the Internet for those with whom the search is strong.