[self note]

October 4th, 2012 — 11:20am

While working on the most recent patch to Hazel, I stumbled across some code that made me scratch my head. It seemed the code was unnecessary and was causing the bug I was trying to fix. I ended up removing that code while doing the fix, figuring I must have been drunk/high/had a gun to my head when I wrote it originally. Initial tests seemed to confirm this, as things were working at first. A couple of days later, I discovered that an older bug had resurfaced. Sure enough, I began to realize why I had written that odd piece of code back then. I ended up backing out my recent changes and making the fix with the original code intact. I also made one other change: I commented the code, explaining why the hell I wrote it that way in the first place.

[self now] != [self future]

One of the nice things about being a solo developer is that you don’t have to deal with the overhead involved when working on a team. You don’t have the issues with miscommunication that you would when working with others. Or do you? Like it or not, you are still a team. Sure, you probably don’t need to communicate with someone else about a current issue, but the you of right now is not the same person as the you of a year from now. You’ll find, time and again, that intentions are lost to the annals of time, so documenting things as if there were other programmers on your team is a worthy exercise. Comment your code. Write descriptive commit messages. Edit your wiki. Your future self will thank you for it.

8 comments » | Debugging, Programming

Modus Operandi

June 1st, 2012 — 9:18pm

Apple devs have had a lot to talk about in the past couple of years. The App Store has changed the landscape in significant ways. As devs, we’re constantly concerned with issues of pricing, conformance to Apple’s rules, marketing, advertising… I may post about those issues sometime in the future but not now. There’s been a bit of soul searching amongst devs lately and I think it’s important for me to step back and talk about where I’m coming from and what motivates me. What I’m finding in a lot of these discussions is that some people seem to think the raison d’être for doing anything commerce-related is to make as much money as you can, with everything else being peripheral to that. There seems to be a disconnect when I talk to people about my business. Where they are talking about profits and growth, I’m thinking in terms of cool things I can add to my product.

Sure, one of my goals is to make money, as I want to make a decent living, but beyond a certain point, the money doesn’t interest me so much. I find the most important things in my life aren’t bought. Yes, I’m fortunate enough that I make enough now to live comfortably, and I understand that that is a luxury, but when I look back on my life, I fondly remember what I’ve done and the people I’ve met, not how much money I’ve made.

For me, it’s about the product. I wrote Hazel because I needed it and when I realized other people did as well, I seized the opportunity. Many of the jobs I’ve had in life were working on products that I didn’t use myself. Sure, there were interesting technical challenges and many of the jobs paid well. While at times I was passionate about the work, I was rarely passionate about the product itself. Now that I’m creating something that I do use, it’s a world of difference. Some companies have a policy where employees can work on what they want for a small percentage of their time and what a difference it makes. Now imagine if they could do that 100% of the time.

As a result, my company is just a vehicle for selling the product. I couldn’t care less about growing the company into some major concern. If I had to make the choice, I’d fold my company in a heartbeat if it meant my product would live on.

Apparently this will sound strange to a certain segment of people, but I’m also not interested in having a huge number of customers in and of itself. I don’t really get satisfaction from people who buy the product in a promo or based on hype and don’t use it. I’m not hit-driven. I want users buying my product, not consumers. What really motivates me is when I hear about people who have been using it for years and it’s one of the first things they install whenever they get a new machine. It’s when people get as excited as I am about new features. It’s when users come up with unique ways of using the product that I wouldn’t have come up with myself.

And Hazel’s not just some thing that I put out there only to move on to the next thing. I think I’ve shown that Hazel is a long term commitment for me (nearing 6 years). I intend to keep working on it until it doesn’t make sense anymore or external forces somehow shut it down. I haven’t put out any other apps besides Hazel so far, but when I do, I’ll try and make sure they are things I would use myself. I feel that scratching your own itch can’t be replaced by stock options in terms of the commitment one has to the product. And if it does come down to having to move on, I’ll try my hardest to make sure that the product can live on in some form or another.

I consider myself very lucky. Going indie has been the best career decision I’ve made and I’ve been fortunate that it has panned out for me. And I intend to stay indie. I don’t have an exit strategy and I’m not looking for a big payout. I’m doing what I want and doing it on my own terms. This is it for me and it’s how I want it to be.

2 comments » | Business, Noodlesoft

Syntax Coloring For Fun And Profit

May 29th, 2012 — 9:32am

At the last NYC Cocoaheads meeting, I did a presentation entitled “Let’s Build a Text Editor: A Practical Introduction to the Cocoa Text System.” In it, I basically go through building a code editor from scratch, pointing out how to use the Cocoa text system along the way. I’m not going to reprise the whole thing here. The first part was an introduction to the basics of creating a document-based app and fitting the text system into that, and the second part dealt with adding line numbering, which, if you’ve been paying attention at home, I covered here a while back. Instead, I’m going to focus on the third part, which is implementing syntax coloring.

One major aspect of syntax coloring is actually parsing the text. And guess what? I’m not going to cover that here either. There are books you should read on the topic. Like many people during their college years, I learned from the Dragon book, though there are probably books more specific to parsing that you can find out there. Keep in mind, though, that you may not have to write as sophisticated a parser as you think. Sometimes, a lexical scanner is all that you need. Just parse the different elements you want to highlight (comments, strings, keywords, etc.) and ignore the syntactic structure. In some cases, this may be enough.

The focus of this article will be getting the Cocoa text system to do the coloring based on what your parser parses. Before we start, you should download the example project as this article refers to it throughout. I’ve created various snapshots at key points to show the progression. If you want, you can go through the various snapshots to see how the app was built up but for the purposes of this article, start at the snapshot called “Version 4”.

The Basic Approach

At the core of the Cocoa text system is the NSTextStorage object. It stores the actual text as well as any formatting attributes. Being a subclass of NSMutableAttributedString, NSTextStorage allows you to apply different attributes to ranges of characters. For syntax highlighting, all we care about is the NSForegroundColorAttributeName attribute, which is the attribute that affects the text color.
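To make the mechanics concrete, here’s a minimal sketch of applying such a color attribute; textStorage and commentRange stand in for your storage and whatever range your parser found:

// Color a range found by the parser by setting the foreground color attribute
// directly on the text storage (this works on any NSMutableAttributedString).
NSDictionary *attributes = [NSDictionary dictionaryWithObject:[NSColor greenColor]
                                                        forKey:NSForegroundColorAttributeName];
[textStorage addAttributes:attributes range:commentRange];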

If you look at the -parse: method in NoodleSyntaxHighlighter, you’ll see that it scans for C-style comments (/*...*/) and string literals ("..."). When it finds the range of a particular entity, it calls -addAttributes:range: on the NSTextStorage to annotate the characters with the appropriate color based on which entity was found. Note that this is how you modify any attributed string; it’s not specific to NSTextStorage. To make sure we keep up with changes, we implement the NSTextStorageDelegate protocol’s -textStorageDidProcessEditing: method. There’s an equivalent NSTextStorageDidProcessEditingNotification, though I believe modifying the text storage is only allowed via the delegate method.
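For reference, the delegate hookup boils down to one method. A rough sketch (the sample project’s -parse: signature may differ):

// Re-highlight whenever the text changes. NSTextStorage calls this on its
// delegate once it has finished processing an edit.
- (void)textStorageDidProcessEditing:(NSNotification *)notification
{
    NSTextStorage *textStorage = [notification object];

    [self parse:textStorage];
}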

If you compile and run at this point, you’ll see that it pretty much works as advertised. We’re done, right? Not exactly. Try loading in a large-ish document. Around a couple thousand lines or so. Start typing really fast. It might depend on your machine, but you’ll find that the typing lags a bit. It seems that modifying the NSTextStorage is not quite as efficient as we’d like. Luckily, there’s a better way to do this.

Using Temporary Attributes

NSLayoutManager has a thing called temporary attributes. These are just like the attributes you use on an attributed string, except that they’re, well, temporary. They are more lightweight and designed specifically for cases like this, where the attributes are only used for display and aren’t intrinsic to the document itself. They’re used for features like spell checking and other places where you need to make temporary annotations to the text.
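Usage-wise, temporary attributes mirror the regular attribute API, just on NSLayoutManager. A sketch, where layoutManager, textStorage and commentRange are assumed to come from your editor and parser:

// Mark a comment range for display without touching the NSTextStorage.
NSDictionary *attributes = [NSDictionary dictionaryWithObject:[NSColor greenColor]
                                                        forKey:NSForegroundColorAttributeName];
[layoutManager addTemporaryAttributes:attributes forCharacterRange:commentRange];

// Before re-parsing, previously applied temporary attributes can be cleared.
[layoutManager removeTemporaryAttribute:NSForegroundColorAttributeName
                      forCharacterRange:NSMakeRange(0, [[textStorage string] length])];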

Imagine writing a rich text editor. You don’t want to make display-only changes to the NSTextStorage since those changes could end up being saved with the document. Sure, the code editor in this example doesn’t allow rich text but you can see where the semantic division lies. The NSTextStorage is your “model” and you don’t want something like syntax highlighting, a “view” feature, to change the model.

In the example project, restore snapshot “Version 5”. This code isn’t much different. It’s adding attributes as before but instead of adding them to the NSTextStorage, they are added via the NSLayoutManager. Now, compile and run it and, if it wasn’t automatically restored from before, load up the large document you used earlier. You should find that it keeps up with your typing this time around.

Now you could stop here but I’d like to take it a step further. You have this nice parser and all. Surely, the ability to pick out different entities in the document is useful for more than just syntax coloring. Maybe you want to add more smarts to your editor. Maybe triple clicks will select the whole comment or string literal instead of just a paragraph/line. Maybe you want to know where code blocks begin and end for code folding. Or maybe you want special tool tips when hovering over different entities.

You can ask the layout manager for the temporary attributes at any character position. Just check for the color that’s set there and deduce which entity (comment or string) it is. Suppose, though, that the user can change the colors in the preferences. If entities are set to be the same color, this method won’t work.

Creating Your Own Custom Attributes

There’s nothing saying that you can’t create and add your own attributes to an attributed string. Sure, only the ones defined by Cocoa will be used by the system for display, but let’s decide for the moment to annotate our text semantically. Instead of setting NSForegroundColorAttributeName, we set our own custom attribute to indicate the type of element we’ve found.

Restore snapshot “Version 6” in the example project. At the top of the NoodleSyntaxHighlighter implementation you’ll see that I’ve defined our own attribute (“noodleElementType”) and the two possible values (“comment” and “string”). The -parse: method is the same as before except instead of setting colors, we set our own custom attribute, marking regions of text as comments or strings.

That’s great but how do we get our colors? If you look at the NSLayoutManagerDelegate protocol, you’ll see a method called -layoutManager:shouldUseTemporaryAttributes:forDrawingToScreen:atCharacterIndex:effectiveRange:. What this method does is allow you to substitute different attributes for the ones that are already there. In our implementation, we check what element type we set during the parse phase, look up its color and return it as the NSForegroundColorAttributeName attribute. So, our NSTextStorage has semantic markup but when displayed, we can substitute in the proper display attributes to get it to color properly.
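A rough sketch of what that delegate method can look like. This assumes the element type was recorded as a temporary attribute and that -colorForElementType: is a hypothetical lookup into the user’s color settings; the sample project’s actual implementation may differ in the details:

- (NSDictionary *)layoutManager:(NSLayoutManager *)layoutManager
   shouldUseTemporaryAttributes:(NSDictionary *)attrs
             forDrawingToScreen:(BOOL)toScreen
               atCharacterIndex:(NSUInteger)charIndex
                 effectiveRange:(NSRangePointer)effectiveRange
{
    NSString *elementType = [attrs objectForKey:@"noodleElementType"];

    if (elementType != nil)
    {
        // Swap the semantic annotation for an actual display attribute.
        NSColor *color = [self colorForElementType:elementType];

        if (color != nil)
        {
            return [NSDictionary dictionaryWithObject:color
                                               forKey:NSForegroundColorAttributeName];
        }
    }
    return attrs;
}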

• • •

Now, there’s one thing I glossed over which is an obvious optimization. Notice how we re-parse the whole document on every change. When you get the text storage notification, you can query it for the range of characters that was changed via its -editedRange method. You can use this information to limit the range of the document you have to re-parse. What you can do is look at what element is there already (from your annotations from the last parse pass) and start parsing from the beginning of that element. You can even tighten it up further by limiting how far it has to parse. I leave implementing this as an exercise for the reader.
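As a starting point for that exercise, the information is easy to get at from within the delegate method; the rest of the bookkeeping is up to you:

// Inside -textStorageDidProcessEditing:, the storage can tell you exactly
// which characters changed and by how much the length changed.
NSRange editedRange = [textStorage editedRange];
NSInteger lengthChange = [textStorage changeInLength];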

As you can see, doing the coloring part isn’t so bad though there are some slightly subtle details to keep in mind. The hard part is the parsing which I’ve conveniently (for myself) left for you to figure out on your own. Enjoy.

And in case you missed the link to the example project above: NoodleEdit.zip

Comment » | Cocoa, OS X, Programming, User Interface

Hazel 3 is out

March 5th, 2012 — 5:03pm

Actually, this is probably old news since this happened last Thursday, but I finally released Hazel 3. Those of you who don’t know what I do for a living might want to check it out. If anything, you’ll understand a good part of the reason why I haven’t posted here much in the past year or so.

To say I’ve been busy is an understatement but it seems the launch was a success. Ok, so the store was not quite working for the first hour and even after I got it up, there were all sorts of glitches. And never mind that the links in one of my emails were wrong, resulting in thousands of people emailing me asking about it. And overlook the fact that there were quite a few instabilities in Hazel for people running on 32-bit that were missed in the beta. And it wasn’t all that fun when my bank froze my corporate debit card because it thought that all the charges I was making that day were possibly fraudulent. I can ignore all that because a bunch of people actually bought the result of my hard work and for that, I say thank you.

And also, as a heads up, I will be splitting this blog at some point in the not-too-distant future. I will be starting up a Noodlesoft/Hazel-specific blog targeted towards my users, with tutorials, tips and news, while keeping Noodlings as my blog for more developer-oriented stuff. Keep an eye out here for updates on that.

 

Comment » | Business, Downloads, Hazel, Noodlesoft, Software

Hazel 3.0 beta

September 12th, 2011 — 12:07pm

After all the delays, dead ends, procrastination, wool gathering, futzing around and some actual hard work, Hazel 3 is finally open for beta testing. There’s no set duration for the beta period; it ships when it’s done.

If you’re feeling lucky, you can get the details from this forum article (you need to register for a forum account if you haven’t already). By the way, I hear Time Machine is pretty cool.

 

1 comment » | Business, Downloads, Hazel, Noodlesoft, OS X, Software

Hazel is 5!

September 5th, 2011 — 11:02pm

Five years ago today, I shipped Hazel 1.0.

Hazel started as a personal project that I wrote for my own use but over time I realized that it might be useful to others. Sometimes you just have to dive in. I quit my job to work on Hazel full-time and some months later, I finally shipped my 1.0. Sales were modest at first but over the years it’s paid off. It was a lot of work but it was worth it. If anything, I’ve clocked in more hours over the past few years working pantsless in my home office than over my entire previous career in various other offices.

This blog has been quiet for a while mainly because I’m still at it. Hazel 3 is nearing the testing stage (expect a beta release and more details soon). It’s a bit overdue; I was hoping to release before Duke Nukem Forever but then again, they had a bit of a head start. With each new version, I feel like Hazel is fulfilling the vision I had when I first released it five years ago, and then some. Of course, I could never have predicted the new and interesting ways you have used Hazel over the years and hopefully you, the users, will help shape the product for many years to come.

Enough about the future. It’s Hazel’s birthday today and in celebration, you can get 20% off until midnight tonight (Tues, Sep. 6, Eastern time). Just use this link. Most of you reading this probably have a copy already but I’m sure you have a friend/relative/corporation with deep pockets that could use a copy (or twenty) so send the link along to them. Or buy an extra copy for yourself because you’re just crazy like that. And while you’re at it, have a drink on Hazel’s behalf, or even better, have a drink (or five) before you hit that order page. I hear you have more fun that way.

8 comments » | Business, Hazel, Noodlesoft, Software

The Proper Care and Feeding of NSImage

April 15th, 2011 — 10:17am

[This was the topic of my presentation at the NYC Cocoaheads meeting last night. I thought it would be nice to also post on the topic here.]

[I've received email from Ken Ferry. See addendum at the bottom]

NSImage is a troublesome class. Over the years, it’s been misunderstood and abused. I think much of this is because of a lack of conceptual clarity in the docs and examples and the API itself can be confusing and misleading. Add to this having to mix with CGImage and CIImage and you can end up with a confused mess.

The way I like to think of NSImage is that it’s a semantic image. When you look at icons, they are made up of several versions at different resolutions. Technically, it’s 4 or 5 images but NSImage wraps those all up into one notion of an image. Semantically, it is an icon of, say, a house but underneath it’s made up of several actual images of a house. The main reason for these different versions is that some graphics do not scale well and it helps to have hand tuned versions, especially for the smaller sizes. You can also do things like omit certain features in the smaller sizes to simplify the graphics and make the icon more recognizable.

One key misunderstanding is the notion that NSImage has actual image data. While there are parts of the API that deal with a cached bitmap (more on this below), NSImage itself is not based on image data. It is best to think of NSImage as a mediator between the image data in the various representations and the drawing context.

[Image: diagram of NSImage’s structure — NSImage mediating between its image representations and the drawing context]

Loading an image from a file and drawing it is usually straightforward. If there are multiple representations, NSImage will figure out the correct one to use when drawing it. Most of this is automatic and is an important key to understanding how to use NSImage. As you will see further on, bypassing this mechanism leads to many issues with size and resolution mismatches when processing NSImages.

I’ve created a little program to help illustrate the points in this article. Since it also includes a new reusable class, I’ve included it in my NoodleKit repo. I suggest checking it out. Just compile and run the ImageLab target (you may need to set the active executable in Xcode in the project menu). The rest of this article refers to it, so I recommend running it while you read.

When you launch the app, it will load the test image. It’s just a colored circle. Now resize the window. You’ll see that the color changes as you resize. The icon itself is made up of four different sized images and to make it clear which representation is being used, I made the circle different colors for each one. Whenever the color changes when you resize, NSImage is switching to a different representation based on the size. Here’s what the different reps look like in Preview:

[Image: the icon’s four different-sized representations, each drawn in a different color, as seen in Preview]

Size is another confusing aspect of NSImage. One thing to remember is that size is not pixels. It represents a coordinate space. One way to think of it is that NSImage’s size is like NSView’s frame while NSImageRep’s size is like NSView’s bounds. For NSImage, its size is a suggested size to draw the image in the user coordinate space. If possible, it’s best to explicitly specify the destination rect.

Let’s take the case of drawing some custom graphics on top of an existing icon with several representations, like drawing a badge over your app icon. A common way to do this is to lock focus on the NSImage, do your drawing, unlock focus and then draw the resulting image. What -lockFocus actually does is allow you to draw into a cache. This has probably led to a lot of the misunderstanding that NSImage holds image data. Unfortunately, there are issues with this approach. Mainly, there is no context about where this image will be drawn, so the resulting image is tied to a specific resolution. Also, you are editing a cache, so none of this image data is reflected in the original image reps and, from poking around, it’s actually destructive in that it may end up removing any other representations.
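In code, the pattern in question looks roughly like this (the image names and rect here are purely illustrative):

// The common lockFocus approach to badging an icon. The drawing goes into a
// cache tied to whatever size/resolution the NSImage happens to have right now.
NSImage *icon = [[[NSImage imageNamed:@"AppIcon"] copy] autorelease];
NSImage *badge = [NSImage imageNamed:@"Badge"];

[icon lockFocus];
[badge drawInRect:NSMakeRect(0.0, 0.0, 32.0, 32.0)
         fromRect:NSZeroRect
        operation:NSCompositeSourceOver
         fraction:1.0];
[icon unlockFocus];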

In our case, where we are modifying an icon with different-sized representations, what we end up doing is locking the resulting image into whatever size happened to be set on the original NSImage. In many cases, you may not know how and where this icon will be used, but if any scaling is involved, you may end up with the wrong version being displayed. In the example program, select “Modified (lock focus)” in the pop-up. Here it picks the 64×64 version (as indicated by the green circle) which becomes a problem when you scale the image up, as shown on the left in the image below.

[Image: the scaled-up badged icon using -lockFocus (left, stuck on the wrong representation) versus a custom image rep (right)]

The one on the right is a version that uses an NSImageRep subclass (select “Modified (custom image rep)” in the pop-up). Notice that it picks the appropriate size as you resize the window. Why is this the case? It’s because NSImage doesn’t ask the image rep to draw until the NSImage itself is drawn. At that point, you actually have information about the destination, including how big it will actually appear on screen. NSImage is able to use this context to determine the right match between the different sized images and the resolution of the output context. The same goes for any drawing code.

Subclassing NSImageRep is quite easy; you just need to override the -draw method. I’ve made it even easier by providing NoodleCustomImageRep which takes a block, allowing you to create images without creating a new subclass.
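For example, a bare-bones rep that draws a flat square might look like this (I’m not reproducing NoodleCustomImageRep here, just the plain subclass route):

#import <Cocoa/Cocoa.h>

@interface SquareImageRep : NSImageRep
@end

@implementation SquareImageRep

// -draw is called at display time, so the current graphics context already
// reflects the actual destination and its resolution.
- (BOOL)draw
{
    [[NSColor blueColor] set];
    NSRectFill(NSMakeRect(0.0, 0.0, [self size].width, [self size].height));

    return YES;
}

@end

From there, it’s just a matter of creating an NSImage, calling -setSize: on the rep and adding it via -addRepresentation:, and letting NSImage handle the rest.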

Say, though, you are drawing an image that should scale without pixellation, like drawing a square. Surely, you can just lock focus on an NSImage and draw a square and scale that as needed? Well, Mr Smarty Pants, take a look at the example program. Select “Drawn (lock focus)” in the pop-up.

[Image: the square drawn via -lockFocus, showing fuzzy, interpolated edges when scaled up]

 

Here, you’ll see odd fuzziness around the edges. What’s going on here? It’s not anti-aliasing; rather, the graphics context where the image is being drawn has image interpolation turned on. As a result, AppKit is trying to interpolate the image from its original size to a much bigger size.

How do you fix this? You could try turning off image interpolation in the destination graphics context but this isn’t always possible or desirable. The better solution is just like before: use a custom image rep to do the drawing (select “Drawn (custom image rep)” from the pop-up). Since the drawing occurs at drawing time, instead of image creation time, it knows about the context it is drawing into and therefore can provide your drawing code with a context at the correct resolution. The crisp square on the right speaks for itself.

Let’s take another example. Say we want to take an existing image and run a Core Image filter on it. Somehow you have to convert your NSImage into a CIImage. This usually entails a game of connecting up various methods that fit together until you get a CIImage. A common way to do this is:

ciImage = [CIImage imageWithData:[nsImage TIFFRepresentation]];

This dumps the whole image (all of its representations) into TIFF data which is, in turn, re-parsed back into an image. Now, CIImage is pixel based so it ends up picking one of the representations. It’s not documented which representation is picked, and since there’s no way to specify the context where it will be drawn, there’s a chance it won’t be the right one. Select “Core Image (TIFF Rep)” in the pop-up and you get something like the image on the left (or maybe you won’t; read on):

[Image: the Core Image filter applied via the TIFF representation (left) versus via a custom image rep (right)]

Now, it doesn’t look too bad here. It took the highest-res representation and used that. That said, technically, it’s not correct. The version on the right (select “Core Image (custom rep)”) shows the correct image rep being used. Also, my experience has shown that the representation chosen by +imageWithData: can differ on different hardware and that it ignores the size set on the original NSImage, so you may not be so lucky depending on what machine your code runs on.

Fortunately, Snow Leopard introduced a new method: -CGImageForProposedRect:context:hints:. As mentioned before, when you draw an NSImage into a context, it will automatically pick the right representation for that context. This method does basically the same thing but without the drawing part. Instead it returns a CGImage which you can then use to create your CIImage:

CGImageRef cgImage = [nsImage CGImageForProposedRect:&rect context:[NSGraphicsContext currentContext] hints:nil];
CIImage *ciImage = [CIImage imageWithCGImage:cgImage];

Keep in mind, though, that the image returned might not be the exact size you wanted. This becomes more of an issue when you are combining multiple images together in a CIImage pipeline and you need them all to be the same size. You can adjust for this by using CIImage’s -imageByApplyingTransform:.
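For instance, a sketch of scaling the result to a target pixel size (desiredSize is assumed to be whatever size the rest of your pipeline expects):

// Scale the CIImage so it matches the pixel size the rest of the pipeline expects.
CGFloat scaleX = desiredSize.width / (CGFloat)CGImageGetWidth(cgImage);
CGFloat scaleY = desiredSize.height / (CGFloat)CGImageGetHeight(cgImage);

ciImage = [ciImage imageByApplyingTransform:CGAffineTransformMakeScale(scaleX, scaleY)];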

In addition to the above method, Snow Leopard introduced -bestRepresentationForRect:context:hints: which does a similar thing but returns an image rep instead. Depending on your needs, you can use one or the other to tap NSImage’s image matching logic.

Finally, a note about performance. NSImage does keep a cache based on the drawing context. This helps for when you repeatedly draw the same image at the same size over and over. If you end up sharing an NSImage across different contexts, you’ll find that you are defeating the cache. For these cases, you should be copying the NSImages. Remember that NSImages are just mediators between image data and drawing context. NSImageReps are the actual image sources and, starting with 10.6, reps like NSBitmapImageRep do copy-on-write making it inexpensive to copy NSImages and their reps.

In the example app, there’s a field which shows the time taken to display the image. You’ll notice that when you resize the window, the cases which use a custom image rep are slower as it has to recache whereas the lockFocus cases don’t since the image is static. If this becomes an issue, you can turn off caching or use a fixed resolution image during a live resize. Another more subtle piece of business is performance when drawing from the cache. If you click the “Redisplay” button, it will cause the image to be displayed again. Since you aren’t changing the size, the cached version can be used. Notice how the versions using the custom image rep are usually a smidgen faster than the lockFocus versions. I suspect what is happening is when you lockFocus on the image, you lock the cache into a specific version and size. As a result, if you are drawing at any other size, it has to scale the cached image every time. With a custom image rep, it’s cached at the exact size so the cache can be used as is.

What are the lessons here?

  1. When defining your image, you should use an NSImageRep subclass and override -draw. If you don’t want to create a whole subclass just to create an image, use my NoodleCustomImageRep (included in NoodleKit) which allows you to pass in a drawing block. Using an image rep gives your drawing code better contextual information than you would get just drawing in a -lockFocus.
  2. If you follow point #1, then you can let NSImage make the decision of which representation to use. Use one of the drawing methods, or -CGImageForProposedRect:... or -bestRepresentationForRect:..., and you’ll get the best-sized representation for the job. Do not assume, though, that this representation will be the actual size you want. When drawing, it also helps to specify the rect to draw into.
  3. Avoid using -lockFocus. It doesn’t produce the correct image in different contexts and can be destructive in terms of kicking out the other reps in the NSImage. It’s still ok in specific circumstances, but you have to know what you are doing.
  4. If using the same NSImage in different contexts, copy it. From 10.6 onwards, this is an inexpensive operation as bitmap data is copy-on-write. Copying NSImages is also a good idea in case someone decides to use -lockFocus on the image (see #3).

If you have access to it, I highly recommend watching the video for Ken Ferry’s session at WWDC 2009: Session 111: NSImage in Snow Leopard (you may need a developer account to view/download it). Much of this is derived from that presentation and it has even more interesting bits about NSImage than what I’ve presented here.

And in case you missed it above, you can find NoodleKit (which contains NoodleCustomImageRep as well as the example program) here.

Addendum (Apr. 18, 2011):

I received an email from Ken Ferry himself pointing out a couple of things. Mainly, that as of 10.6, lock focusing doesn’t draw into the cache anymore. It creates a representation whose context is suited for the main display. It is still good for images which are meant for that context. Also, as the NSImage context will maintain whatever size-to-pixel ratio the main display has, it can still be used in this situation should resolution independence come into play. It’s not much different than before in this respect but it’s not as volatile as a cache, which I believe is the key point here.

That said, all the caveats above about using -lockFocus still hold true. It’s not so great for when the image needs to be scaled or if you have representations you want to keep (it does remove all other reps when you lock focus). Also, because the representation is tied to the machine, it’s not very suitable for persisting.

Comment » | Cocoa, Downloads, Icons, OS X, Programming, Quartz, User Interface

Life with the MacBook Air

January 4th, 2011 — 12:13pm

Last month, I picked up a MacBook Air. I had only gotten my 13″ MacBook Pro a year earlier and it’s a bit uncharacteristic of me to pick up a new machine so quickly afterwards. Before the MBP, I had a PowerBook 12″, which should give you a sense of how long I tend to stick with a machine before upgrading.

I had been eyeing the MacBook Airs ever since they were introduced. Now, keep in mind that for how I work, a portable machine is a secondary machine. I’ve always had some sort of heavy iron desktop machine that I use the lion’s share of the time. The laptop is mostly used for travel. Therefore, portability is a primary concern. I don’t want it too heavy or too big.

I also don’t need tons of storage. In fact, I try and store as little unique data on my laptop as possible. Because of the increased potential for laptops to be lost or stolen, I consider the copy of the data on it to be expendable. I make sure that important data is either centralized or replicated somewhere else. I use FileVault to keep the data from prying eyes. Code is checked in to a version controlled repository somewhere else. Email is all via IMAP. I use MobileMe to sync. Music I have on my iPhone or I stream from my desktop machine. The point is that if I lose the machine, all I lose is the hardware. As a result of all this, I find that I don’t need all that much storage space nor the ability to swap in a new drive for more space. My laptop only needs enough space for my “working set”.

The other priority in a laptop is some level of performance. I don’t need the fastest Mac but I do need something that doesn’t get in the way of doing work. As a result, a year ago, I ended up picking the MacBook Pro over the Air as, at the time, it didn’t seem as if the Air was beefy enough for my needs as a developer.

That changed with the newest round of Airs. After getting some early adopters of the new MBAs to do some compile tests for me, it appears as if the top of the line MBA 13″ now compiles as fast as or faster than my MBP from last year. The combination of the now standard SSD, the upgraded CPU, and memory option of 4 gigs make it competitive with the MBP 13″.

Note that I didn’t consider the 11″. 12″ is probably the smallest screen I can tolerate. Also, the higher performance options are only available on the 13″.

Overall, the Air feels snappy. The machine feels faster than what its specs would indicate. Development is great. Compiles are sufficiently zippy (though still can’t touch my Mac Pro). Recently, I did a presentation at my local Cocoaheads meeting. A good bit of it was using Instruments and at the end, people were surprised that it was done on the Air.

Traveling around with it has been a breeze. While it’s not so light that it feels like nothing (it still feels substantial, which is not such a bad thing), it does feel light enough that I don’t feel the fatigue I’ve felt carrying around my MBP for long stretches. I haven’t had the opportunity to have it around for a full day at a conference yet but I look forward to carrying something lighter this time around.

A pleasant surprise was the screen. I was never a fan of the glossy screen and I’m happy to report that the one on the Air, while not matte, is not as reflective. Another nice bit is that audio is now transmitted via the Mini DisplayPort output, allowing you to send audio and video to your TV over a single HDMI cable. Note that you need an adapter that supports transmitting the audio, as not all of them do. I know this is in all the recent Apple laptops but I thought it was worth pointing out that this feature is continued in the Airs.

Now, it’s not all roses. If your CPU is under load for more than a couple minutes, the fans will kick in. I do notice the fans rev up more than on my MBP. Usual culprits are Flash and games. Fortunately, it usually doesn’t happen when doing builds since most of my compiles are incremental. Actually, I can do a full debug build of Hazel without the fans going off, though a release build definitely triggers them. That said, a month in, it feels like it’s happening much less, as if the machine were broken in. It’s probably my imagination or maybe it’s me becoming more accustomed to the fan noise to the point where I don’t notice it.

One annoyance for me is that the power button is now a regular keyboard key located in the upper right corner.

[Image: the MacBook Air keyboard, with the power key as a regular key in the upper right corner, next to the eject key]

The Air retains the eject button for the rare case where you bought the external optical disk drive and have it hooked up. All of this screws up my muscle memory when changing the volume, since I’m used to the volume keys being just one key in from the right, not two. If they insist on keeping the eject key, then I’d prefer the power button off the keyboard, seeing as how I almost never turn the laptop off. And unfortunately, it will be hard to adjust to since it is different from every other keyboard I currently use (which are all made by Apple, by the way).

All these nitpicks are just that, nitpicks. In no uncertain terms, I love this machine. It fits what I need it for. I consider it an upgrade from my MBP. It’s faster (SSD + better GPU). It’s lighter (by over 1½ pounds). It has a higher res screen. Granted, I could have gotten a current-generation MBP with an SSD, but I’m more than willing to shed the pounds for last year’s performance levels. It feels quick and agile, both digitally and physically. In many respects, this is the laptop I’ve always been waiting for.

And if anyone is interested in a 2009 MacBook Pro 13″, drop me a line. I’ll throw in a free family pack of Hazel if you contact me before I put it up on eBay.

 

4 comments » | Hardware

Fun with Glue

July 5th, 2010 — 11:41am

In my last post, I introduced a little adapter class called NoodleGlue. It’s a simple class that wraps around a block, allowing you to use it as the target and action for APIs that require one. It can also optionally take an extra block that is executed when the glue object is deallocated, which allows you to do any cleanup. In my NSTimer category from last time, I used this cleanup ability to let the object unregister from any notifications. Since I was working in a category, I couldn’t hook into NSTimer’s -invalidate or -dealloc methods, so I created a glue object with a cleanup block, which I then set as an associated object on the timer. As a result, when the timer is deallocated, so is my glue object, and my cleanup block gets triggered.

In more general terms, what does this mean? With my glue object plus the associative references feature, you can inject your code into any object’s deallocation sequence. It’s like being able to be notified if any object is freed. I’ve wrapped this up in an NSObject category with a new method, -addCleanupBlock:, which allows you to add a block to be invoked when the object is deallocated. It’s probably more of a proof of concept than something you’d want to use in production code but you can find it in the latest version of NoodleKit (it’s tacked onto the end of the NoodleGlue class).
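To give a flavor of how such a thing can be put together (this is a simplified sketch, not the actual NoodleKit code; the class and method names here are made up), the trick is to hang a small glue object off the observed object with an associative reference and do the work in the glue’s -dealloc:

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

// When the observed object is deallocated, the runtime releases its associated
// objects, so the glue's -dealloc runs and triggers the cleanup block.
@interface CleanupGlueSketch : NSObject
{
    void (^_block)(id object);
    id   _object;   // not retained; handed to the block for convenience
}
- (id)initWithObject:(id)object block:(void (^)(id object))block;
@end

@implementation CleanupGlueSketch

- (id)initWithObject:(id)object block:(void (^)(id object))block
{
    if ((self = [super init]) != nil)
    {
        _block = [block copy];
        _object = object;
    }
    return self;
}

- (void)dealloc
{
    if (_block != nil)
    {
        _block(_object);
        [_block release];
    }
    [super dealloc];
}

@end

@implementation NSObject (CleanupBlockSketch)

- (void)addCleanupBlockSketch:(void (^)(id object))block
{
    CleanupGlueSketch *glue = [[CleanupGlueSketch alloc] initWithObject:self block:block];

    // Use the glue's own address as the association key so multiple cleanup
    // blocks on the same object don't clobber each other.
    objc_setAssociatedObject(self, (const void *)glue, glue, OBJC_ASSOCIATION_RETAIN);
    [glue release];
}

@end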

Now, this sounds pretty cool on paper but this may be one of those cases of a solution looking for a problem. Not to be deterred, I’ve come up with some contrived cases that might actually be useful.

Add extra cleanup for extra stuff you attached to an object

It’s how I used it in my NSTimer category. You can unregister notifications and invalidate objects which need more than just a -release call.

Zero out references

One of the cool things about weak references in a GC environment is that they nil themselves out when the object is collected. You can simulate this behavior when using retain/release by setting up a cleanup block to nil out your reference to the object in question.

Tracing/debugging deallocations

Sure, you can do this in the debugger or Instruments but in some cases you want to trigger more involved code that you can’t do in gdb. Also, in a multithreaded program where the timing affects whether the bug shows up or not, this allows you to track deallocations without setting a breakpoint in the debugger which might otherwise change the timing.

Test if objects are actually deallocated (unit tests)

Credit goes to Guy English for this one. If you need to do a unit test where you want to test your memory management, you can “do the glue” to test that objects actually get deallocated. The problem here is that if it’s not working (i.e. the object doesn’t get deallocated), your block won’t get called, so any assertions you put in there will just silently not happen. So, have your block set a variable and, at the end of the test, do an STAssertTrue() on the variable.
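In practice, that test might look something like this (Widget is a stand-in for whatever class you’re testing, using the -addCleanupBlock: category described above):

- (void)testWidgetGetsDeallocated
{
    __block BOOL deallocated = NO;
    Widget *widget = [[Widget alloc] init];

    [widget addCleanupBlock:
        ^(id object)
        {
            deallocated = YES;
        }];

    [widget release];

    // If the cleanup block never ran, the object was never deallocated.
    STAssertTrue(deallocated, @"Widget was never deallocated");
}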

• • •

Keep in mind that there are a couple of important caveats:

  1. You cannot retain the object you are “observing” in the block itself as this will create a retain loop and the observed object (as well as the glue) will never be dealloc’ed. Note that just referencing an object retains it unless you assign the object to a variable declared with __block. This has already been done for you in my code as the cleanup block is sent a non-retaining reference to the object being freed, so use that reference instead:

    	[self addCleanupBlock:
    		^(id object)
    		{
    			// Do not reference "self"
    			[object doSomething];
    		}];

  2. It’s not defined when in the deallocation process the associated object is freed. As a result, you can’t rely on any state of the observed object as it may be the case that all its ivars have already been released. But you can have the block “capture” variables and the block will retain and subsequently release them. For instance, if you want to log the “name” of an object as it’s deallocated, you can reference the name in the block and it will be retained. Be careful, though: if you reference an ivar of the object directly, you end up retaining that object. So, if the observed object is creating the block and you access its ivar in that block, you end up implicitly retaining the observed object, which is in violation of point 1 above. In those cases, assign the ivar to a local variable and use that (in this case, do not use __block in the variable declaration as you want the block to retain the object):

    	NSString	*localName = _name;	// _name is an ivar
    	[self addCleanupBlock:
    		^(id object)
    		{
    			// Do not reference _name here as that will implicitly retain self
    			NSLog(@"Object %@ has been deallocated.", localName);
    		}];

I suggest reading up on the memory management issues with blocks as described here. Joachim Bengtsson has a great article on blocks (that link goes to the section relevant to this article but I recommend reading the whole thing) and there is always Mike Ash’s reliable series of articles on the subject, which I have consulted many a time myself.

Comment » | Cocoa, Debugging, Downloads, OS X, Programming

Playing with NSTimer

July 1st, 2010 — 2:51pm

It’s been a long while. Part of it is that I’ve been busy working on Hazel 3. And since I’m not interested in writing a book, I’m finding it hard to be motivated to keep posting here. Nonetheless, every so often I have to let something squirt out.

Today, we are going to talk about NSTimer. One thing you may or may not have noticed is that in certain cases where time is suspended (putting your machine to sleep) or altered (changing the clock), NSTimer has a tendency to try and compensate. For instance, say you set a timer to fire in an hour. You close the lid on your MBP and come back 30 minutes later. You’ll see that the fire date for the timer has adjusted to take into account the time it was asleep. Here’s a quickie diagram for those not quite following:

[Image: timeline diagram showing the timer’s fire date being pushed back to compensate for the time the machine was asleep]

Now, this is great if you wanted a timer to fire an hour later in the machine’s conception of time. But there are times when you want it to fire on the actual date you set on it. A typical example is setting a timer for a calendar appointment. That time is an absolute point in the user’s timeline and you don’t want that timer to shift to compensate for the machine’s timeline.
The basic fix here is that when one of these time-altering events occurs, you reset the timer’s fire time to the original time that was set on it when it was created. Luckily, there are notifications for when the machine wakes from sleep as well as when the system clock is changed (the latter being new in 10.6). Naturally, one would think an NSTimer subclass would be the most straightforward way to do this. Unfortunately, it seems that it doesn’t work. Even if you get over the hurdle that NSTimer is actually abstract (like a class cluster), you have to add the timer to the run loop to schedule it. While NSRunLoop seems to accept the NSTimer subclass, it doesn’t ever fire. I don’t know if NSRunLoop is looking for the NSTimer class specifically or if it’s calling some private method that I need to override. If someone else has had any luck doing this, drop me a line.

The alternative is a category. The problem is that we need at least one ivar to store the original fire date. Normally, you can’t add ivars in a category but in this glorious post-10.6 age, we can, via associative references. It’s basically just a dictionary where you can store variables associated with an object. Sure, you could have implemented that yourself but the nice thing is that it handles memory management as well. When the object is dealloc’ed/collected, depending on the memory management you specify, it can also release/collect any associated objects.
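As a sketch of the mechanism (the key and helper functions here are illustrative, not the category’s actual internals), storing and restoring the original fire date with an associative reference looks something like this:

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

static const void *kOriginalFireDateKey = &kOriginalFireDateKey;

// Remember the fire date the caller originally asked for.
static void SetOriginalFireDate(NSTimer *timer, NSDate *fireDate)
{
    objc_setAssociatedObject(timer, kOriginalFireDateKey, fireDate, OBJC_ASSOCIATION_RETAIN);
}

// Put the timer back on the absolute date after a sleep/clock change.
static void RestoreOriginalFireDate(NSTimer *timer)
{
    NSDate *originalDate = objc_getAssociatedObject(timer, kOriginalFireDateKey);

    if (originalDate != nil)
    {
        // If the date slipped past while the machine was asleep, fire immediately.
        [timer setFireDate:([originalDate timeIntervalSinceNow] < 0.0) ? [NSDate date] : originalDate];
    }
}

The restore function would be called from your handlers for the wake-from-sleep and clock-change notifications mentioned above.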

So, NSTimer category it is. It supplies the method (among others) +scheduledTimerWithAbsoluteFireDate:target:selector:userInfo: which creates and returns an autoreleased and scheduled timer that will fire on the given date regardless of any time fluctuations/lapses. If all of a sudden the fire date has passed (like if the machine was asleep during the fire date or if the system clock was set ahead), it will fire immediately. Note that calling -setFireDate: won’t quite work in the sense that if the timer ever has to be reset from a time shift, it will be set to the date used when creating the timer, not the one that you set it to afterwards. This is an implementation limitation as I am using a category and can’t override methods (at least not without doing some method-swizzling which is not something I want to deal with). In cases where you need to change the fire date, I’d suggest just creating a new NSTimer.

I’ve also included a test harness where you can see it in action compared against a timer running in normal (non-absolute) mode. When run normally, both timers should fire at the same time but you can test things out by changing the system clock or putting your machine to sleep. You’ll see that the regular timer’s fire date will start to drift out while the “absolute” timer will stick with the time originally set.

I also went ahead and added methods to have NSTimer use blocks instead of a target/selector combo. Not sure why this wasn’t in there already but I thought it’d be useful.

And that’s not all! Included is my NoodleGlue class. It’s a simple little class that just wraps a block (plus another block for cleanup, if you need it). It’s useful for cases where you want to set a target and selector for some object to use but don’t want to create a new class or method for it. Check out the source code for the NSTimer category to see how it’s used both with NSTimer (to implement the block API) as well as with NSNotificationCenter. In the latter case, there is a blocks-based API but I had special memory management requirements that couldn’t be done with the existing API.

Needless to say, this extension only works on 10.6+. It’s in the latest version of NoodleKit and as a result I’ve made NoodleKit require 10.6 as well. Just build and run the TimerLab target. Fixes and suggestions welcome.

[Update 8:35pm EDT] I neglected to push the code to the github repository. Sorry about that. Everything should be there now.

7 comments » | Cocoa, Downloads, OS X, Programming
