Why is it called a “core dump” anyway?

Star Trek: TNG Warp Core Breach

The diagnostic file emitted by a crashing process in a modern operating system can contain a variety of useful information, including exception type, current instruction, CPU state, call stack, and sometimes the entire contents of the current thread’s stack or even the entire process heap. So why is it called a “core dump”?
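
You can still watch this happen today. The sketch below is purely illustrative (the file name crash.c is arbitrary, and it assumes a Linux-like system where core dumps are enabled, e.g. with ulimit -c unlimited): a tiny C program that writes through a null pointer, so the kernel produces exactly the kind of diagnostic file described above.

    /* crash.c -- an illustrative sketch (the file name is arbitrary).
       Writing through a null pointer raises SIGSEGV, and the kernel
       responds by snapshotting the process state into a core file. */
    #include <stdio.h>

    int main(void)
    {
        volatile int *p = NULL;   /* an address nothing is mapped at     */
        printf("about to dereference a null pointer...\n");
        *p = 42;                  /* SIGSEGV: the core dump happens here */
        return 0;                 /* never reached */
    }

Build it with debugging symbols (gcc -g crash.c -o crash), run it, and open the resulting core file (traditionally named simply “core”; on systems using systemd-coredump you may need coredumpctl to retrieve it) with gdb ./crash core to see the faulting instruction, the register state, and the call stack at the moment of the crash.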

For years I thought this was an amusing Star Trek reference by the original implementors of UNIX, after all the episodes in which the Enterprise’s reactor threatens to explode and Geordi has to save them by “dumping the warp core,” but it turns out the actual explanation is much more prosaic.

Ferrite Core Memory

In the days before computers used capacitor-based DRAM, the dominant technology for main memory was to store bits as magnetic polarization in a grid of tiny ferrite cores (iron rings). Thus a machine’s main memory was literally called core memory, or simply core. When a computer of this era crashed, it would simply output the entire contents of main memory to the punchcard printer, literally dumping core to output. Later, these core dumps became large files on the machine’s drum or disk drive, and eventually core memory became obsolete in favor of static and dynamic RAM, but the name remained.

If that sounds painful, consider the Whirlwind computer developed at MIT around 1951 (pictured below). When this 2 kB, 0.04 MHz behemoth crashed, it would simply display the entire contents of memory as a string of octal numbers on a dedicated CRT screen. Then, an automated camera would take a picture of this CRT on microfilm [1]. You, the programmer, would get the developed microfilm the next morning and display it on a projector, which would be your crash debugger. Operand highlighting was done with a brightly colored marker on the film transparency, and the disassembler was a guy you called on the phone to ask what instruction 0125715 meant. At least the dump files themselves were small: about 35 millimeters, more or less.


1951 computer control room with CRT display and vacuum tubes

Control room for MIT's Whirlwind computer, circa 1951

[1] Everett, R.R. The Whirlwind I Computer. Proceedings of the 1951 Joint AIEE-IRE Computer Conference, pp. 70-74, Philadelphia, PA, 1951.

2 Comments

  1. Mike Dunlavey says:

    Hi Elan,

    When I started programming, core memory was all there was. I even had to reboot using an actual bootstrap loader. And, on the Apollo guidance computer, there was a kind of read-only memory called “rope” or “braid”.

    OK, enough bragging 🙂

    If you want to collaborate on something about performance tuning, as I said, that would be fun.
    I wrote a book of which that was a part (Building Better Applications – lousy title), and a DDJ article (11/93). The book didn’t sell much, but now it seems to be a collector’s item. I think on stackoverflow I’ve somewhat refined the explanation. Currently I’m trying to get the book up on Google, but it’s in limbo.

    Take care,
    Mike

  2. Ambient Sheep says:

    Heh, considering that Unix came about around 1970, while ST:TNG with Geordi and his warp cores didn’t start airing until 1987, that really WOULD have made the Unix designers way ahead of their time…
