Intel Impatience

January 18th, 2006

On the morning of January 10, I woke up expecting to waste a couple hours of my day scouring web sites and IRC channels for the latest transcriptions of Steve Jobs’ Macworld keynote speech. It didn’t really bother me that it was not being streamed live this year, because every other year when they’ve tried that, I’ve ended up squinting at a low-quality, slow-updating video and not really getting much more than I would have from the painstakingly verbatim notes I read live on IRC.

While I was expecting to throw away a couple hours, I didn’t yet know I’d be throwing away a couple thousand dollars. [“Throwing away” will seem less appropriate once I actually get the hardware!] I knew I needed/wanted to get an Intel box sometime soon, in order to start testing my own software. I’d made whatever “transition” was necessary in theory months ago, but in practice I’d yet to try the apps out in a live Intel box.

So when I saw the Intel iMac, I sighed. No Intel for me, I thought. I had subconsciously formulated an Intel fantasy that involved buying an Intel-based Mini, using it in the common area as a media center, and rebooting it into Windows XP as necessary for my work. I already use VPN for almost all of my PC needs. The ugly Dell sits underneath my desk, serving mostly as a foot warmer and sometimes as a reminder of just how good we have it in the Mac world.

When the MacBook Pro was announced, however, my ears perked up. My “portability” currently consists of a 500MHz iBook G4, stuck on 10.3 because I’ve been too lazy to figure out how to install Tiger from a hard drive (no DVD drive). This announcement gelled for me a magic combination of want and need. I need an Intel machine to test against. I want a laptop to cruise to the cafe with. OK, sold! I placed my order at 11:08AM PST. The problem is, it’s not slated to ship until February.

I’m getting impatient! Other people are getting iMacs, and I’m stuck on the waiting list. I’m considering switching my order to an iMac – perhaps I can replace my Dual G5 2.0GHz with… an iMac? If there was ever a good time to sell a powerful G5, it’s probably now, while the G5 still runs Photoshop faster than an Intel-based Mac.

I got so impatient to try out my in-house applications that I built Universal copies and posted them in a private section of my web site. I then went to my local Apple Store (after calling ahead to confirm that they had gotten their Intel Macs on schedule) to do a little test-drive.

I walked into the Apple Store and started scanning for the word “Intel.” Nowhere to be found. There were tons of iMac G5 displays, but no Intel iMac! Damn it! They lied to me. As a last stab at success, I asked an employee, “Are there any Intel iMacs on display?” I tried to feign consumerism so he wouldn’t get suspicious about my motives. “Sure, we have one right over here. There isn’t much on it, but you can surf the web or whatever.” Something tells me Apple wouldn’t be too happy to have their employees describe the “full suite of Apple applications” as “nothing much.”

The machine he escorted me to was indeed Intel-based, despite the outdated “iMac G5 – $1299” placard next to it. I guess a side effect of not changing the price is that it allows retail stores to be lazy. I went straight to my web page, downloaded my universal apps, and attempted to launch them from the Desktop. “You are not allowed to launch this application.” What! Oh crap, I’m going to have to come clean about my motives. I thought there was maybe a 10% chance the Apple Store employees would take pity on me, or be awed by my plight, and let me loose on the machine. But the 90% chance of them just saying “No” and then watching me more carefully informed my decision to just play around a bit.

In a short while I figured out how to work around the security, and that made me feel pretty bad-ass. I’ve got code to debug, damn it! I excitedly launched my applications, only to learn that they were in fact not Intel-compatible. I was really expecting them to launch and run without a hitch. “Well, better to find out now than to ship and let my customers find out!” I rationalized. But I was a bit bummed. I basically do things “by the book.” Why don’t my apps run? In fact, one runs and one crashes, but the root cause appears to be the same. I’m getting some kind of NSUnarchiver-based exception. Does this look familiar to anybody?

2006-01-17 15:48:58.058 Clarion[1011] An uncaught exception was raised
2006-01-17 15:48:58.168 Clarion[1011] *** file inconsistency: read 'I', expecting 'L'
2006-01-17 15:48:58.168 Clarion[1011] *** Uncaught exception: <NSArchiverArchiveInconsistency> *** file inconsistency: read 'I', expecting 'L'

It’s hard to debug the problem at the Apple store, since they don’t have Developer Tools installed. So I took some notes and scp’d them up to my web server from the store. When I got home I examined the logs (and a stack trace in the crasher case) more carefully.

I’m assuming there’s some data somewhere that is supposed to look like “LIST” or something but is coming back “ILTS” (or would it be “TSIL”?) due to byte-swapping issues. A quick scan of my sources doesn’t reveal any funky archiver behavior, and I can’t find any references to this type of error alongside “Intel” in the developer list archives. I searched my Nib files for “LI” and “IL” but didn’t find anything particularly interesting. The only comments I can find related to archiving issues in Apple’s documentation have to do with archiving bitfield values. What could both of my apps be doing in common that would cause this behavior?

I will probably go back down there today or tomorrow to try to get some more information, but I thought it might ring a bell with somebody.

Update: I just noticed that my applications are using “Pre-10.2 Nib” format. I guess I never updated them, or something. I wonder if this could be the problem. I notice that my “objects.nib” file in the Pre-10.2 version does have a lot of “I”s in it, while the keyedobjects.nib file from a “10.2 and later” nib has a lot of “L”s.

21 Responses to “Intel Impatience”

  1. Lee Harvey Osmond Says:

    I’ve been a WebObjects programmer for a long time. So long in fact, that when I was still primarily an AppKit programmer, I was doing fat builds on OPENSTEP, x86 and 68k.

    Have a look at eg /Developer/SDKs/MacOSX10.4u.sdk/System/Library/Frameworks/AppKit.framework/Versions/C/Headers/NSMatrix.h for how to lay out bitfields that need to be persistent.

    NSArchiver doesn’t do anything special about endianness. It doesn’t matter with objects. You will probably want to use the inline functions in Foundation/NSByteOrder.h to ensure that anything written to or read from disk is in network byte order, that is to say big-endian, that is to say PPC — so when you’re done, on PPC your code should read and write the same byte sequence as the pre-modified version, and on Intel you should be able to read and write PPC-originated archives.

    Incidentally — try using the disassembler in ‘otool’. I always used to build fat for m68k because the disassembled code was easier to read than its hppa/x86/sparc equivalent. These days, x86 is going to be easier to read than ppc!

  2. Matt Says:

    1) To install 10.4 on your iBook, put it in Target Disk Mode and hook it up to a newer Mac. Then boot the newer machine from the installer disk and enjoy.

    2) The guy at the store probably said the new machine didn’t have much on it because they didn’t yet have an in-house disk image to put on it. Apple retail stores pull machines out of inventory to use as demos, then copy custom disk images to them so they’ve got tons of software. If they hadn’t done that yet, the machine would be exactly as it was when it came out of the box.

    3) If you’d told them what you were actually doing, they probably would have been excited to help (unless the store was too busy).

  3. Daniel Jalkut Says:

    LHO: Thanks, but I don’t think I have any bitfields in my code. So it’s not really applicable to me. I was hoping for something else in the “FAQ” section about archiving. But I have since learned that I may be an exceptional case. I’m currently investigating with the help of a “smart Apple person” so I’ll update this entry with anything I find out.

    Matt: beautifully simple. I will definitely try that Target Disk Mode trick. Sounds much easier than burning CDs or installing from a Firewire drive or whatever.

    About the “not having much on it,” you’re probably right. I guess I haven’t paid attention to exactly what they demo on the other machines. I’ve always assumed it was basically the standard Apple stuff.

    It seems like the Apple stores are always busy lately! Thank goodness :)

  4. Eyal Redler Says:

    I had the same problem, but not with the Intel Macs. This is basically related to some changes that have been made in the way @encode works. You can find many details about this in the post (and responses) I made in the Xcode list.
    Hope it helps.

  5. Andy Lee Says:

    So you were able to launch Terminal in order to run scp? I was in an Apple Store recently and wanted to check something with Terminal, but they had put Terminal.app into a password-protected zip file. I got around it by downloading iTerm (and deleting it when I was done).

  6. Daniel Jalkut Says:

    Thanks, Eyal. That does look suspiciously similar. But if there is such a problem – why aren’t more applications’ archived documents, for instance, failing to open?

    Andy: The “security” must vary from store to store. At Cambridge, they have all the applications out in the open but have some limitation in place that causes the Finder to allow the user only to open certain applications.

    Since it’s looking like I may not need to go back to the Apple store for the rest of my debugging – I’ll share the trick for launching apps in this circumstance. Whatever the security mechanism is seems to be limited to the Finder. So to launch any application you want to, just open the Script Editor and run a script:

    tell application “Terminal” to activate

    Of course, if they’ve locked up the Terminal application it won’t help. But downloading iTerm was quick thinking.

  7. Eyal Redler Says:

    Daniel, I don’t know why more applications don’t show it. It could be that most people would use ‘unsigned int’ and not ‘unsigned long’. For sure, you don’t see this problem “out there” because people fixed it before releasing their applications…

    At any rate, I was able to fix this by changing the definition from ‘unsigned long’ to ‘unsigned int’. I’m assuming that the system at the Apple Store was “virgin” and didn’t contain any previous installation of your app, so the archive must be some sort of resource you install the first time, a resource that may have been created by previously compiled versions of your app. Do you have any such resources?

  8. Daniel Jalkut Says:

    Eyal – indeed! Very interesting. I was going to say that I didn’t have any “unsigned long” encodings in my archived objects, but then I looked more closely at the CoreAudio type “MusicDeviceInstrumentID” which I’m encoding. Indeed, it’s a “UInt32” which is in turn an unsigned long.

    If unsigned long, and therefore “UInt32”, is broken for encoding, this problem seems very bad indeed. I don’t feel very good about downgrading the specificity of types like MusicDeviceInstrumentID just to gain the correct functionality.

    You’re right that I am including some pre-archived objects – in my user defaults, to be precise. But by the sounds of things, my archived preferences will exhibit the same problem for users who saved a preference with an earlier version and are upgrading to the latest.

    I’m assuming I didn’t see this problem earlier because I’m still using gcc 3.3 for my PowerPC build.

  9. Eyal Redler Says:

    If you look in the documentation for @encode, you’ll notice that ‘I’ is for unsigned int and ‘L’ is for unsigned long. Now, the program fails giving you “read ‘I’, expecting ‘L’”, which means that, when the archive was created, @encode produced ‘I’ for unsigned long, which is wrong according to the spec.
    Basically this is an error that existed with previous compilers and was only “fixed” recently. For all practical purposes you’ve always had it as unsigned int, so I don’t see any harm in changing it “officially” right now (which is what I did); there is no difference between the two anyway.
    Of course, if you have it as one type in only one class, you could also typecast it only when unarchiving that particular class:
    [coder decodeValueOfObjCType:@encode(unsigned int) at:&myInstanceVar];
    In my case this wasn’t really possible since I’m using it all over the place.

  10. Daniel Jalkut Says:

    Eyal – after closer examination, I have discovered that the problem in my applications is slightly different from the one you describe. In my tests, @encode(unsigned long) always works as expected – with both gcc 3.3 and gcc 4.0. It’s UInt32 (and derivative types) that seems to cause a problem.

    I wonder if a class of these issues was addressed in the 4.0->4.0.1 gcc upgrade, and I’m just running into a residual one?

    In my sources, I have chosen to hardcode “I” instead of using @encode as a temporary fix. This seems valid and safe since it’s a compile-time value in any case, and I’m just forcing it to be the value I know is in my existing archives.

  11. Eyal Redler Says:

    Daniel – I didn’t mention it in my response, but I too experienced it through a type based on UInt32, so I was never actually encoding unsigned long directly. It does look like the same case. I guess this is the reason Apple didn’t notice it.
    I wonder if you plan to get this fix tested at the Apple Store after all, or if you’re going to wait for your Intel Mac.

  12. Daniel Jalkut Says:

    I had a chance to spend a few hours today with an Intel iMac with Xcode installed. (If you’re reading, thanks again!). All of my products now “check out” with the encoding problem worked-around. But there were a few other changes I wanted to make before releasing updates, so I think I’ll probably head to the Apple Store one final time to do a “sanity check” before releasing them.

  13. Peter Adler Says:

    If it’s 500MHz, it’s not an iBook G4. The G3s are 300-900MHz (clamshell: 300-466, rectangular 500-900), while the G4s are 800MHz-1.42GHz. The overlap is at 800MHz, where there are both 12″ G3 and G4 models; the G4 models say “iBook G4” conveniently on the bottom of the display.

  14. Daniel Jalkut Says:

    Peter: You’re absolutely right. I am so out of date I’ve undersold my out-of-dateness! I’m working with a 500MHz G3 iBook. Do I deserve a MacBook or what?

  15. Peter Adler Says:

    Well, probably. But given the history of 1.0 releases (and personally, I’m really pissed off that they left the FW800 port off), I’d wait for the second-gen, even if it takes six months.

    Anyway, that’s my own plan…

  16. Volker Says:

    iBook Target Disk Mode installation: works like a charm. Had to install a PowerBook with a non-functional SuperDrive (due to a car rolling over the PB). Took 15 minutes or so.

  17. Andy Lee Says:

    Daniel, thanks for the Script Editor tip for launching Terminal. I’ll keep it in mind in case I need it some day.

    Oh, and I’ll add my thumbs up to the Target Disk mode approach. I used it at work a few months ago to install OS X on a G3 tower (I think) that didn’t have a DVD drive.

  18. Jon Hendry Says:

    “But given the history of 1.0 releases (and personally, I’m really pissed off that they left the FW800 port off)”

    On the other hand, the little card slot has plenty of bandwidth, and could conceivably support a card with two FW800 ports (though I’m not sure how they’d handle the connectors).

  19. Jon Hendry Says:

    “If it’s 500MHz, it’s not an iBook G4. ”

    Also, I don’t think the G4s were offered without a DVD player.

    I, too, am using a 500 MHz G3 iBook, and placed my order for the new Book on keynote day.

  20. Peter Adler Says:

    “On the other hand, the little card slot has plenty of bandwidth, and could conceivably support a card with two FW800 ports”

    Fair enough. And as soon as someone actually manufactures a FW800 Express Card, then that may be less of an issue…at least for some people (not for me; I want it on the motherboard, so I can use the card slot for other things).

    However, conceiving of such a card doesn’t mean the card exists today. At least as far as I can find, no one manufactures any Express Card slot devices today. So anyone who buys a MacBook will have a slot that’s useful in theory, but may not be useful in practice for many months. No thanks.

    I find this especially infuriating in light of two developments: the stories I heard from both Apple employees and from Oxford Semiconductor’s rep at the show that FW800 had been left off specifically because Intel fought with Apple, and Apple caved to get the product out in time; and the FUD comment earlier this week from a Gartner analyst (quoted in an eWeek article about Intel’s Apple group) that Apple needed to use commodity Intel motherboards in order to capitalize on economies of scale. To me, this suggests that Intel thinks Apple’s just another clone company, who’ll buy whatever crap Intel can shovel out their doors.

    “(though I’m not sure how they’d handle the connectors)”

    Probably with a dongle.

  21. Red Sweater Blog » Blog Archive » Universal Appeal Says:

    […] In addition to more tangible features, both FastScripts and Clarion are now Universal – tested on a Core Duo iMac. To Apple’s credit, I spent more time getting to the iMac (I don’t own one yet – waiting impatiently for the MacBook Pro) than I did testing or fixing bugs. The only issue encountered turned out not to be an Intel issue at all, but rather a gcc 4.0 bug fix that causes a serious incompatibility with some gcc 3.3 code. Everything else “just worked.” Go Apple! […]
