Hazel 2.3 is out

July 13th, 2009 — 1:16pm

Seems like forever since I’ve done a release. There have been odd setbacks here and there but Hazel 2.3 is out at last.

For those of you who share your machine with others, you’ll find that the App Sweep feature now allows everyone on a machine to have their support files thrown away when someone throws away an app. Everyone who wants to participate has to enable the feature (look in the “Trash” pane). I would have made this not an option but I had visions of people complaining about others being able to see what apps they are throwing away. So, now there’s a checkbox which, personally, I’d rather not have.

The feature I’m most excited about is, of course, the one that most of my users probably won’t care about or notice. But, this being a developer blog, I’m sure some of you would be interested. In version 2.2, I added an embedded script editor. The AppleScript editor had syntax highlighting because, well, Apple gives that to you. In 2.3, I’ve added syntax highlighting to the regular shell script editor. Of course, it’s not just for shell scripts. Just set the interpreter you want and if it’s one that Hazel knows about (currently, it’s bash, Perl, Python, Ruby and awk), then Hazel will color your code appropriately. Give it a spin and let me know what you think.

Oh, and I didn’t add any UI to configure the highlight colors. If you really hate the color scheme it uses, you can take your Xcode color theme and copy/symbolically link it as ~/Library/Application Support/Hazel/Hazel.xccolortheme. I thought about making Hazel just use whatever is set in Xcode but I got the feeling that people weren’t using Xcode to edit their non-Objective-C stuff. I may change my mind on this in a future release. Feedback welcome.

And of course, there are a bunch of other new features, improvements, and fixes (and probably bugs).

As for the roadmap, there will definitely be another (free) update of some sort before 3.0. Snow Leopard is coming out in a couple months and I need to make Hazel compatible. If you are running Snow Leopard and would like to test, definitely drop me a line.

Comment » | Hazel, Noodlesoft, Software

Fun with KVC

June 30th, 2009 — 10:33am

[Warning: this article is for entertainment purposes only. Any harm you do to your code or any offense you cause to the sensibilities of other programmers is your responsibility.]

Some time ago, Guy English posted a great and twisted article on key value coding (KVC). He comes up with a nice hack to do stuff with KVC that was never intended. Of course, there are some neat ways you can use KVC without indulging in the craziness that Guy English delves into (and I think he’d be the first to admit that it’s not meant for primetime). The main point remains, which is that using KVC with NSArrays is a great way to avoid coding loops over their contents. You have an array of “Person” objects and want an array of their names (accessible via a -name method)? Just do…

newArray = [array valueForKey:@"name"];

Now, the keys do not necessarily have to be properties of the object. There’s nothing stopping you from calling other methods on your objects that return other objects, provided they don’t require any arguments. For instance, if starting out with an array of strings…

newArray = [array valueForKey:@"lowercaseString"]

…will generate a new array of lowercase versions of the original strings. Actually, the methods don’t even have to return objects. Want to convert an array of numbers in string form to numbers? Try the following:


array = [NSArray arrayWithObjects:@"1", @"2", @"3", @"4", nil];
newArray = [array valueForKey:@"intValue"];

This gives you an array of NSNumbers. KVC auto-boxes scalar types as needed. It’s a great way to do conversions/transformations of arrays of objects in one fell swoop.

Suppose you want to do a deep(er) copy of an array of objects (not only copy the array, but each of its objects as well):

newArray = [array valueForKey:@"copy"];

There is the issue, though, that the copies in the array now have a retain count of 1 and will be leaked unless extra care is taken. But there’s no need to write a loop to autorelease the objects. Instead, we can use keypaths to create chains:

newArray = [array valueForKeyPath:@"copy.autorelease"];

Retain it now and release it later or just let it get autoreleased. Either way, everything cleans up nicely.

And if you have a bunch of NSMutableCopying-conformant, immutable objects and want to convert them to their mutable counterparts:

newArray = [array valueForKeyPath:@"mutableCopy.autorelease"];

That gives us an array of autoreleased mutable copies.

Sure, not super mind-blowing but hopefully something above may save you some keystrokes. If you have your own KVC tricks, post them here in the comments.

[Update 2009-06-30]

This all started out with a conversation I had with Guy about wacky KVC stuff. The original title of this post was “KVC Abuse” but after a few edits, it got lost. I added the warning at the top but I’ll make it clear here: this is an abuse of KVC. This whole post is an indulgence.

I avoided a whole discussion of implementing HOM (Higher Order Messaging) type functionality in Objective-C but here’s a method you can use in an NSArray category:

- (NSArray *)map:(SEL)selector
{
	NSMutableArray		*result;
	
	result = [NSMutableArray arrayWithCapacity:[self count]];
	for (id object in self)
	{
		[result addObject:[object performSelector:selector]];
	}
	return result;
}


You don’t get the auto-boxing or the keypath stuff but the end result is still succinct and convenient for the basic case:


newArray = [array map:@selector(lowercaseString)];

Of course, my own version that I use has a more verbose method name (-transformedObjectsUsingSelector:) but no matter how you slice it, it will generate less bile from other developers.

6 comments » | Cocoa, OS X, Programming

413 days

June 19th, 2009 — 1:42pm

No, not the number of days since I last posted, though yes, it’s been a while. I just happened to be looking at my server stats and noticed that I had an uptime of 413 days. I guess this post would have been more timely and poetic at the 1 year mark but I have to say that I’m pretty impressed with Slicehost (warning: it’s an affiliate link so if you sign up using that link, I get some credit). The last reboot of my slice, way back when, was when I upgraded to a bigger slice.

I’m sure other people on other providers can post similar numbers but seeing as I had come from DreamHost, I find it pretty amazing. And yes, that is an affiliate link as well since I still use them for other things – I can be a whore at times, too.

Looking back over the past 413 days, I only recall contacting Slicehost support once, and that was for an administrative issue. I do remember some network problems once but those were resolved within minutes. By the time I asked around in the IRC channel about it, it was fixed. For the most part, I almost never think of Slicehost. The fact that I can take them for granted says something about their reliability.

How have the other services and tools I’ve been using on my site fared during this time?

PotionStore has been great. And now that it has an in-app Cocoa store component, it’s even better. I’ve currently integrated it into the latest Hazel beta (forum account required) if you want to see it in action. Just keep in mind that while the app is beta, it is connected to the live store so all sales are real.

Between the two main transaction processors I use, PayPal has been far better than Google Checkout. Very few issues with the former (knock on wood). Unfortunately, when there has been an issue with Google Checkout, I’ve had to hunt to find a way to even contact them and then the email support has been pretty crummy. On the flip side, I can find PayPal’s phone number quickly and their support people seem very knowledgeable and when the call is over, the issue is resolved. Fortunately, Google Checkout accounts for a small number of sales.

For server monitoring, I have been using Montastic. At least, I thought I was. Recently I checked my account only to notice that it wasn’t really monitoring. After unwedging it, it seemed to not like my store certificate, bugging me with alerts regularly. It also seemed to be sending MIME mail of some sort which ended up as MMS messages on my phone. They’ve got pics of my server on fire or something? Annoying and potentially expensive. I’ve disabled it, so suggestions for an alternate server monitoring service are welcome.

I could end this post with “Here’s to another 413 days” but I know I have to do a server upgrade at some point which will break my streak. Nonetheless, it’s good to know that downtime occurs on my terms and not my provider’s.

4 comments » | System Administration

Understanding Flipped Coordinate Systems

February 2nd, 2009 — 12:25pm

Flipped coordinate systems always seem to confuse people, myself included. After working things out with pen and paper or handpuppets, I always feel like they’re not quite correct so I thought I’d resolve things here. If anything, I can refer back to it whenever I forget this stuff (which happens every couple years). First a refresher in case you need it: Cocoa Drawing Guide: Flipped Coordinate Systems.

The main thing to keep in mind is that while a flipped coordinate system is basically translating the origin to the top left and flipping things vertically, semantically it doesn’t mean that everything is upside-down. Images, text and other elements are supposed to render right-side-up. The flipped coordinates should affect where those elements are placed, not how they are rendered. Being flipped is a higher level notion and is separate from the CTM. While setting something as flipped will flip the CTM, modifying the CTM without setting the graphics context as flipped results in everything being entirely upside-down, which is different. Many of the Cocoa classes and functions take flipped coordinate systems into account and will draw right-side-up for you but it gets tricky with images.

There are two sets of methods in NSImage to render it in a graphics context: the -composite... methods and the -draw... methods. I’ve put together an interactive example app to help illustrate the differences. It might help you to download it now so you can follow along. The figures included in this article are from that app.

The -composite... methods actually seem to work in the sense that the image is always right-side-up. The problem is that they draw “upward” from the draw point regardless of the coordinate system. The result is that you have to calculate the compositing point at the upper left corner of the image instead of the lower left. This is the result of these methods not taking the CTM into account in terms of scaling and rotation. That also means that if you scale your view up, the images won’t scale with it. While you can use this to your advantage in some cases (like rendering resize handles or labels which you want to stay the same size regardless of zoom), it’s usually not the right way to do it.

[Image: composite-scaled.png]

View scaled at 2x. Notice how the composited version does not scale.

The -draw... methods, on the other hand, follow the CTM properly. That also means that if the view is flipped, so is the image, so you need to adjust accordingly. So while these methods obey the CTM, they do not take into account the flipped flag of the context, which would be the cue to draw things right-side-up.

[Image: draw-flipped.png]

The draw... routines follow the CTM, but not the flipped flag.

Now, to complicate things even further, NSImages themselves can be flipped. The reason for this isn’t to make the images upside-down. It’s there to provide a flipped coordinate system for whenever you draw into it (i.e. lockFocus on it, draw, then unlockFocus). It’s useful for when you want to tell a flipped view to draw into an NSImage, for instance. It’s basically like a flipped view; you don’t expect a flipped view to draw upside down, no matter which view you set as its superview. A subtlety to be aware of is that flipping an image loaded from a bitmap does not make too much conceptual sense (you are changing the coordinate system after the content has already been “drawn”) but it does have the practical effect of flipping the image vertically, which seems to be an implementation detail. Yes, flipping the image will work to “correct” orientation problems in most cases but, depending on where your image gets its data (for instance if it has an NSCustomImageRep like the example app – see below) and whatever implementation-specific details lurk in NSImage, you may end up with undesired or inconsistent results.
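
To make this concrete, here’s a minimal sketch (not taken from the example app) of drawing into a flipped image. The string drawing methods are flip-aware, so the text lands at the top of the image, right-side-up:

NSImage *image = [[[NSImage alloc] initWithSize:NSMakeSize(100.0, 100.0)] autorelease];

[image setFlipped:YES];      // drawing inside lockFocus now uses a top-left origin
[image lockFocus];
[@"top-left" drawAtPoint:NSZeroPoint withAttributes:nil];   // flip-aware: lands at the top, right-side-up
[image unlockFocus];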

As mentioned above, I’ve put together a little interactive example app (Leopard-only) to show how the different methods behave. In addition, I’ve written methods in an NSImage category (-drawAdjusted...) which will render the image correctly regardless of the flipped status of the coordinate system it draws into. As suggested in Apple’s docs, it does a transform reversing the flip, draws the image, then reverts the coordinate system back.
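
The gist of those methods is something like the following (a sketch along the lines of what’s in the download; the method name and signature here may not match the category exactly): check the context’s flipped status and, if needed, concat a transform that flips the coordinates back around the destination rect before drawing.

- (void)drawAdjustedInRect:(NSRect)rect fromRect:(NSRect)srcRect operation:(NSCompositingOperation)op fraction:(CGFloat)delta
{
	if ([[NSGraphicsContext currentContext] isFlipped])
	{
		NSAffineTransform	*transform;

		[NSGraphicsContext saveGraphicsState];
		// Flip the coordinates back around the destination rect's horizontal midline
		// so the image comes out right-side-up, then draw and restore the old state.
		transform = [NSAffineTransform transform];
		[transform translateXBy:0.0 yBy:NSMaxY(rect) + NSMinY(rect)];
		[transform scaleXBy:1.0 yBy:-1.0];
		[transform concat];
		[self drawInRect:rect fromRect:srcRect operation:op fraction:delta];
		[NSGraphicsContext restoreGraphicsState];
	}
	else
	{
		[self drawInRect:rect fromRect:srcRect operation:op fraction:delta];
	}
}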

The image itself is drawn by code, not loaded from a bitmap. The reason for this is that I also wanted to illustrate using flipped images. It draws an arrow and some text to indicate the proper orientation. When flipped, the drawing code is exactly the same in that no coordinates are recalculated. The text content is changed to compensate for the new orientation. Notice how the text renders right side up no matter the flip state of the image; an indication that the NSString drawing methods are “flip-aware.” Also, it shows how to check the graphics context to get its flipped status so you can make your own drawing routines flip-aware as well.

Unfortunately, not everyone draws flipped images correctly. One place in particular is NSView’s/NSWindow’s -dragImage:at:offset:event:pasteboard:source:slideBack: method which will draw flipped images upside-down. Since you can’t control how the image is drawn, you can instead draw your flipped image into another non-flipped image and pass that in. I’ve added a method to the NSImage category to do this and you can check out the result in the example app (you can drag the image out of the views though only the last one has the corrected version).

And what if you actually want to draw everything upside-down, you irrepressible nut? Well, apply your own transforms using NSAffineTransform or CGAffineTransform. Just remember to concat the transform and not set it (a good general rule when using affine transforms). As long as you don’t tell any classes that you’re flipped, it should work out.
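
For example, a minimal sketch (viewBounds is a stand-in for whatever rect you are flipping around, say, your view’s bounds in -drawRect:):

NSAffineTransform *flip = [NSAffineTransform transform];

[flip translateXBy:0.0 yBy:NSHeight(viewBounds)];
[flip scaleXBy:1.0 yBy:-1.0];
[flip concat];	// concat composes with the current transform; -set would clobber it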

Hopefully this didn’t make things even more confusing (and also, hopefully, my interpretation of all this is correct). If you are still lost then just follow these rules:

  1. Do not set images as flipped unless you know what you are doing.
  2. Use the -drawAdjusted... methods in my category (or similar technique) to do all your image drawing.
  3. If you didn’t listen to rule #1 and you have a flipped image and it is showing up upside-down even when following rule #2, then use the -unflippedImage method in my category to get an unflipped version of the image and use that instead.
  4. Never go in against a Sicilian when death is on the line.

That should handle most cases you run into. And trust me on rule #4.

7 comments » | Cocoa, Downloads, OS X, Programming, Quartz

The Invisible Interface: Stealing Prefs

January 27th, 2009 — 1:52pm

In this installment of The Invisible Interface, we are going to look at stealing preferences. What is stealing preferences? Simply enough, it’s using the preferences of some other app instead of having your own for a particular feature. The point of this is to avoid having to provide a separate interface for settings when the user has already made their choices known somewhere else.

The key here is finding cases where your functionality is more centrally used somewhere else. I’m going to use Hazel as an example but hopefully these will illustrate the point well enough for you to look for where you can apply it yourself.

Spring-Loaded Folders

For those that don’t know, Finder has a feature called spring-loaded folders: when you drag a file over a folder and hover for a moment, the folder will flash and then open so you can drill deeper. It allows you to navigate the folder tree without having to let go of the file you are dragging.

Hazel implements spring-loaded folders as well. It works when you want to move/copy rules in between folders. Hovering the dragged rule over a folder in the list on the left will cause the view to switch to the rules for the hovered-over folder allowing you to drag back into the rule list and place the rule where you want it. [On a side note, implementing this uncovered a bug in NSTableView (at least on Tiger; have not checked with Leopard) resulting in me doing my own implementation of NSTableView’s drag and drop.]

Under Finder’s “General” tab, you’ll see a checkbox and slider to configure these settings. Your first thought may be to provide a similar UI in your app. But why? Does the user really care about tweaking this for each app it appears in? It seems like whatever setting works in Finder will be fine wherever else it is used so why not just use Finder’s preference?

Commonly Used Folders

In Hazel, when you specify a destination folder for some actions (like move and copy), there is a pop-up of folders. You’ll notice that there’s a list of common folders at the end of the pop-up menu. If you look a bit closer, you may notice that these are the same folders in the sidebar of your Finder windows. Those folders are common destinations for files so it’s a good list for Hazel to use. By grabbing that list from Finder, Hazel avoids any sort of extra maintenance/interface for managing that list.

AppleScript Editor

In 2.2, Hazel introduced inline editing of scripts. You are provided with a mini-AppleScript editor right in Hazel’s rule interface. Now, there are potentially different things you can tweak to make the editor suit your needs, such as line wrapping, tab widths and whether to use the script assistant. But if you poke around Hazel’s UI, you’ll see that there’s no interface to set these. That’s because Hazel steals these preferences from Script Editor. If someone is serious enough about editing AppleScript that they care about these settings, there’s a good chance they have Script Editor installed and already set these preferences. By using its preferences, there is a consistency of user experience between the two editors.

• • •

Of course, you can’t do this everywhere. It’s best suited when the functionality is primarily used elsewhere and you are echoing it in your own app. The apps Apple ships with the system are an easy mark since you can usually rely on them being installed and they tend to be the places where common functionality is defined. Overall, the result is a less tweaky and cluttered interface and a more seamless experience with the rest of the system.

Doing the Heist

You can grab other apps’ preferences using either CoreFoundation or Cocoa. With Cocoa, NSUserDefaults is your go-to guy. -persistentDomainForName: does what you want. Give it a bundle ID and in return, you get a dictionary of preferences. What would’ve made more sense is something like +userDefaultsForName: which would return an NSUserDefaults instance, but hey, it’s not like Apple is hiring me to do API design. With CoreFoundation, you can use CFPreferencesCopyAppValue() to pick individual preferences. Again, a bundle ID is needed.
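
For example, here’s a sketch of grabbing a spring-loaded folder setting from Finder. The bundle ID is real but the key name and fallback value are made-up stand-ins; since these settings aren’t documented, check the actual plist and keep a default of your own in case the key is missing or changes:

NSDictionary	*finderDefaults;
NSNumber		*delay;

finderDefaults = [[NSUserDefaults standardUserDefaults] persistentDomainForName:@"com.apple.finder"];
delay = [finderDefaults objectForKey:@"SpringingDelay"];	// hypothetical key name
if (delay == nil)
{
	delay = [NSNumber numberWithDouble:0.75];	// fall back to your own default
}

// The CoreFoundation equivalent for picking out a single value (you own the returned ref):
// CFPropertyListRef value = CFPreferencesCopyAppValue(CFSTR("SpringingDelay"), CFSTR("com.apple.finder"));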

And I can’t leave without placating the more pedantic among you that have to point out the potential dangers of doing this. Therefore, I must note that there is some risk in doing this as most apps do not document their preference settings and they can change at any time. Having default values of your own for these settings should minimize the risk, at least buying you time until you can re-work things to use the new schema. That said, if this is for some critical/primary functionality in your app, it might behoove you to have your own settings for it. As they say, invest only what you can afford to lose, or, to milk the stealing metaphor, don’t do the crime if you can’t do the time. And while we’re at it, just say “no” to drugs, kids.

Comment » | Cocoa, Hazel, OS X, Programming, User Interface

Maintenance: Shady Characters

January 13th, 2009 — 5:44pm

As you may or may not have noticed (more likely the latter), this blog was down for a chunk of the afternoon. I had to fix something, and, well, it took a bit longer than usual. You may have noticed that you’d see garbage characters like “ö” pop up in posts and comments. That’s because some time ago a WordPress upgrade changed the character encodings. I didn’t consider it a high priority issue and let it sit until now.

Following this article, I converted everything over only to realize that none of the actual characters were converted properly. Instead of trying to debug SQL scripts that could potentially destroy all my data, I went through and edited every character encoding screw-up by hand. It wasn’t so bad with my posts since I pretty much remember what I put in there. Fixing user comments was a different matter. Being on a perfectionist tear, I used the Wayback Machine to find the comments before I performed the fateful WP upgrade just to figure out if somebody used a smart quote or an em-dash. Fun.

Hopefully everything is back up and fixed. If you notice any other garbage characters floating around, please post here so I can fix it.

And yes, I’m overdue for a real post. All you have to do is ÃâπÀìâ,öå¢Ã,Å,ìãâπÃ∫, and I just might be compelled to write something.

1 comment » | Noodlesoft, System Administration, Web

Positional Sound in User Interfaces

October 23rd, 2008 — 3:50pm

Video games are at the forefront of what kinds of rich interactions people can have with computers. In the past decade, there’s been a push for more and more immersive virtual environments, resulting in more advanced APIs and hardware to provide things such as super-fast 3D rendering. In recent years, OS X has leveraged these advances in the predominantly 2D world of user interfaces, often in brilliant ways as seen with QuartzGL, CoreAnimation and CoreImage.

In video games, it’s quite common to exploit stereo output or even better, surround sound, to provide positional audio cues. Just as graphics can simulate a 3D space, so can sounds be placed positionally in the same space. If you, super-genetically-modified-mutant-soldier, are running around on the virtual battlefield and there is some big-bad-alien-Nazi-demon-zombie dude shooting at you from the side, you will hear it coming from that direction and react accordingly. Directional audio cues can supplement visual cues or even supplant them if visual ones cannot be shown (i.e. something requiring attention outside your field of view).

On OS X, sound is used rather sparingly in the interface, which is probably a good thing. But for those cases where its use is warranted, why not take advantage of the technology available? Just as animation can be used to guide the user’s focus, why not sound? OS X does ship with OpenAL, which is to sound what OpenGL is to graphics, providing a way to render sounds in a 3D space.

I’ve put together a quick proof of concept app (download link near the end of the article). Move the window around the screen and click the button to make a sound. Based on the window’s position, the sound will appear to come from different sides, which, for the most part, means left/right since most sound output systems aren’t designed to articulate things in the up/down direction. The program itself basically maps the window position to a point in the 3D sound space. Right now, it doesn’t really use the z-axis (the axis that goes into your screen) but conceivably you can do things like make the sound appear further away based on window ordering. Try using headphones if the effect is not as apparent using speakers.
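
The mapping itself can be pretty simple. Here’s a rough sketch (not the demo’s actual code) of positioning an already-created OpenAL source based on where a window sits relative to the primary display; the scale and z values are arbitrary:

#import <OpenAL/al.h>

- (void)positionSource:(ALuint)source forWindow:(NSWindow *)window
{
	NSRect	screenFrame;
	float	x;

	// Use the primary display (the one with the menubar) as the center of the soundstage.
	screenFrame = [[[NSScreen screens] objectAtIndex:0] frame];

	// Map the window's horizontal center to roughly -1..1 around the screen's center.
	x = (NSMidX([window frame]) - NSMidX(screenFrame)) / (NSWidth(screenFrame) / 2.0);

	// Listener sits at the origin; put the source off to that side, slightly "in front" on the z-axis.
	alSource3f(source, AL_POSITION, x, 0.0f, -1.0f);
}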

There is a significant technical issue, though. You can’t really know the actual physical dimensions and layout of a user’s screens. In addition, the position of the speakers relative to the screens is also not known. While you can get screen resolutions and relative positions of the screens, these are mostly hints at the actual layout. In my demo program, it is assumed that the screens are relatively close to each other forming one gigantic screen. It is also assumed that the speakers produce a soundstage roughly centered on the primary display (the one with the menubar). It assumes a model like this (the circle is the user and the thin slabs are the monitors, from a top-down view):

[Image: screen-setup1.png]

In reality, it’s probably more likely the user would have a setup like the following:

[Image: screen-setup2.png]

But who knows, it could possibly be something like this:

[Image: screen-setup3.png]

The point here is that the effectiveness of this is dependent on the user’s setup. A particular idealized model would have to be chosen that hopefully works well enough for most people. While pinpoint accuracy is not really feasible, it probably isn’t required either. Human hearing is imprecise, otherwise ventriloquists would never be able to pick up a paycheck. Just an indication of left, right or center is probably enough for these purposes.

Where would this be useful? Well, this all came up yesterday when I received an IM (via Adium). I had my IM windows split up across two screens so I had to scan around a bit to find out which window had the new message. Though the window was on the screen to the left, the audio alert made me look at the main screen since the sound was centered straight ahead. It would be great to see an idea like this implemented in Adium and I’ve filed a feature request with them for their consideration. It’s ticket #11292 so you don’t go and submit a duplicate request.

It would be interesting to see more use of this in user interfaces out there. I don’t want to encourage people to add sounds to their apps if they weren’t already using them but for those that are, it’s something to consider. Overall, the effect is quite subtle but with some tweaking, it can be quite effective.

The link to download the demo program is below. Sorry, no source is provided this time. The code is a hacked together mess of stuff copied and pasted from an Apple example as I have never used OpenAL before. This can probably also be implemented in CoreAudio by adjusting the balance between the channels. If you are considering implementing something like this, email me and I’d be happy to discuss details as long as they don’t involve audio APIs since, well, I don’t know them particularly well.

Download PositionalAudioAlertTest.zip (Leopard only)

Thanks to Mike Ashe and Chris Liscio for advice on CoreAudio, which I ended up not needing as Daniel Jalkut suggested I use OpenAL instead which made things easier.

5 comments » | Downloads, OS X, Software, User Interface

Displaying Line Numbers with NSTextView

October 5th, 2008 — 8:16pm

Yes, it’s free code time again. I’ve been neglecting the blog for some time so hopefully this will make up for it. Think of it as that conciliatory heart-shaped box of chocolates used as a sorry way to make up for forgetting about your birthday, after which, I go back to my old ways of sitting on the couch all day watching sports, ignoring you.

In version 2.2 of Hazel, I added mini AppleScript and shell script editors so that people could enter scripts inline without having to go to another program and save them to external files. I’ll admit, I didn’t set out to make an uber-editor since it was intended for small scripts. Nonetheless, a user recently pointed out that when a line wraps, it’s hard to tell if it’s a continuation of the previous line or a new one. One of his suggestions was putting line numbers in the left gutter. If you don’t know what I’m talking about, look at TextMate (the example he cited) or Xcode (you need to turn it on in preferences). I thought it might be overkill for a script editor that will mostly be used for scripts less than ten lines long. I’m instead considering doing an indented margin for continuation lines. Less visual clutter and it addresses the problem at hand.

Nonetheless, I was curious about implementing line numbers. Poking around, I found some tips on how to do it but it seemed like there were odd problems implying it wasn’t as straightforward as one would think. So, snatching some free time in between other things, I decided to tackle the problem.

I looked into subclassing NSRulerView. The problem is that NSRulerView assumes a linear and regular scale. Now, to make it clear, I am talking about numbering logical lines, not visual ones. If a line wraps, it still counts as one line even if it takes two or more visually. The scale is solely dependent on the layout of the text and can’t be computed from an equation. Despite these limitations, I went ahead and subclassed NSRulerView. If anything, NSScrollView knows how to tile it.

I had this notion that NSRulerView was a view that synced its dimensions with the document view of the scrollview. With a vertical ruler, I assumed it would be as tall as the document and the scroll view just scrolls it in tandem with the document. Not so. It’s only as tall as the scrollview. That means you have to translate the scale depending on the clipview’s bounds.

I added some marker support via an NSRulerMarker subclass that knows about line numbers. The line number view will draw the markers underneath the labels a la Xcode (with the text inversed to white). The sample project uses another subclass which will toggle markers on mouse click. While NSRulerView usually delegates this to its client view, it made more sense to just do it in a subclass of NSRulerView. You have to subclass something to get it to work and it made more sense to subclass the ruler view since the code to handle markers never interacts with anything in the client view anyway. Personally, I find it an odd design on Apple’s part and would have preferred a regular delegate.

The project is linked below. The main classes are NoodleLineNumberView and NoodleLineNumberMarker. Some notes:

  • To integrate: just create the line number view and set it as the vertical ruler (see the sketch after this list). Make sure the document view of the scrollview is an NSTextView or subclass. Depending on the order of operations, you may have to set the client view of the ruler to the text view.
  • The view will expand its width to accommodate the widths of the labels as needed.
  • The included subclass (MarkerLineNumberView) shows how to deal with markers. It also shows how to use an NSCustomImageRep to do the drawing. This allows you to reset the size of the image and have the drawing adjust as needed (this happens if the line number view changes width because the line numbers gained an extra digit).
  • Note that markers are tied to numerical lines, not semantic ones. So, if you have a marker at line 50 and insert a new line at line 49, the marker will not shift to line 51 to point at the same line of text but will stay at line 50 pointing at whatever text is there now. Contrast with Xcode where the markers move with insertions and deletions of lines (at least as best it can). This is logic that you’ll have to supply yourself.
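
Here’s what the integration looks like in practice, sketched out (the -initWithScrollView: initializer is what I’d expect to use; check the headers in the project for the exact API):

NoodleLineNumberView	*lineNumberView;

lineNumberView = [[[NoodleLineNumberView alloc] initWithScrollView:scrollView] autorelease];
[scrollView setVerticalRulerView:lineNumberView];
[scrollView setHasVerticalRuler:YES];
[scrollView setRulersVisible:YES];
// Depending on the order of operations, you may have to point the ruler at the text view yourself:
[lineNumberView setClientView:[scrollView documentView]];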

More details, including performance notes, can be found in the Read Me file included in the project.

I’m putting this out there because I’m probably not going to use it and it seems like a waste of some useful code. Also, my apologies to the user who asked for this feature. I feel like somewhat of a jerk going through the trouble of implementing the feature and not including it. It was more of a fun exercise on my part but I still feel it’s not suitable for Hazel. That said, I may consider adding it and having it available via a hidden default setting. Votes for or against are welcome.

In the meantime, you can use the code however you want. MIT license applies. Please send me any bug reports, suggestions and feedback.

Enjoy.

Download Line View Test.zip (version 0.4.1)

Update (Oct. 6, 2008): Uploaded version 0.3. Fixes bugs found by Jonathan Mitchell (see comments on this post). Also made line calculations lazy for better performance.

Update (Oct. 10, 2008): Uploaded version 0.4. Fixes bugs mentioned in the comments as well as adds methods to set different colors. There is a display bug that happens when linking against/running on 10.4. See the Read Me for details.

Update (Oct. 13, 2008): Uploaded version 0.4.1. Figured out the 10.4 display bug. Apparently, NSRulerView’s setRuleThickness: method doesn’t like non-integral values. Rounding up solves the problem. Thanks to this page for identifying the problem.

Update (Sep. 29, 2009): I have included this class in my NoodleKit repository so you should check there for future updates.

23 comments » | Cocoa, Downloads, OS X, Programming

The Invisible Interface

September 29th, 2008 — 1:22pm

This is something that I’ve thought about for some time so I thought I’d write a series on the topic of invisible interfaces. What is the invisible interface? When people think of a user interface, they think of something visual made up of windows and widgets. Even for a commandline program, it’s the arguments, the output and error messages. But what many people aren’t aware of are the choices the designer made and the logic the programmer wrote that make decisions for you. An interface encompasses not only what the developer put into it, but also what they specifically kept out. This benefits the user in a number of ways: a less cluttered interface, a simpler interaction paradigm and fewer steps to accomplish a task. Many of these things are too subtle to be noticed normally which is the beauty of it. Sometimes the best interface is the one you never know is there.

Let’s take for example the flush toilet. Yes, sorry if this example is a bit disgusting but I’ll try and keep it clean and it is a fitting example. Just bear with me here. So, where were we? Ah yes, the toilet. Simple interface. Push down on the lever, water is flushed down and it stops and refills the tank ready for the next flush. It doesn’t get much simpler than that (well, it can but more on that below). Notice how you don’t have to stop the flush. If the toilet is calibrated properly, it should have enough water to flush down whatever you may put in there.

Of course, from a performance/efficiency standpoint, it’s not optimal. You are flushing the same amount of water each time, whether there is liquid or solid matter to be disposed of. How does one work this new requirement (the need to save water) into the interface? In Europe (I have yet to see them stateside), there are toilets with a split button. Hit one side and a lesser amount of water is flushed whereas hitting the other side flushes down a full measure. There are usually markings to indicate 1 or 2 (one or two dots is what I’ve seen) so you can figure out which one is which. Now, the interface has become more complicated. Yes, in the grand scheme of things, it’s not rocket science, but humor me. Now a decision has been added. Do I hit the 1 or 2 button? The user is now required to give the device more information than they had to before. The question is, is complicating the interface worth the functional gain and also, is there a way to effect the same result without changing the interface at all?

How about auto-detecting the amount of water needed? Not only does this optimize the efficiency of the device, it also takes away a decision. Now, of course, whether this can be practically done is in question. It is unclear whether the technology to do this reliably exists and there are also issues of manufacturing, cost and maintenance that play into it. But the point is that from a pure interface standpoint, it would seem to be a better solution. It meets the new requirement while retaining the one button simplicity from before.

And to take it even further, it could sense when a flush is needed, alleviating the need for the button altogether. While these types of toilets are becoming more common in public restrooms, I haven’t heard of any demand for these in the home. Here, it’s possible that automatically doing something on the user’s behalf becomes unwelcome. I can imagine in your smaller bathroom at home, you are likely to trigger it accidentally by walking by it which can be startling. In a public setting, you probably don’t care if toilets are firing off left and right like the cannons in the 1812 Overture. On one hand, it could be just an issue of implementation; maybe the technology just isn’t accurate enough. On the other hand, it’s very possible that this is a feature (when to flush) that the user wants control over. Either way, it’s an issue that the designer must grapple with.

The point of all this is that there is some room for improvement in terms of simplifying interfaces when one strives to have their program/device do more for the user. The more your program does, the less the user has to. But one can also overstep their bounds to create something that may be seen as intrusive. It’s about defining the balance between what the user does and what the machine does, with an eye towards putting more on the machine’s side.

As I mentioned in the beginning, I’m intending this to be the start of a series. Don’t expect some well-thought-out arc with this; it will probably just be an occasional article here and there. While part of me wouldn’t mind writing more about toilets (I haven’t even touched upon those wacky Japanese toilets1), in the next installments, I’ll try and come up with examples more relevant to computer/human interaction.


1: warning: linked page features bare asses

7 comments » | User Interface

Passenger On Board

July 22nd, 2008 — 7:47pm

I just switched PotionStore to use Phusion Passenger. Also known as mod_rails, Passenger is an Apache module for running Rails applications. Unlike with other Apache plugins like mod_php, your application still runs in separate processes. Previously, I had been using Apache as a proxy to a mongrel cluster. On the surface, this doesn’t sound much different but Passenger does give you a couple of things:

  • It maintains the pool of Ruby processes for you. It can adjust the pool dynamically as needed in case you want to reclaim memory when it is not busy, for example. You don’t have to worry about setting up and maintaining a separate set of servers like you do with mongrel. It gets restarted with Apache and you can also trigger it to restart just the Ruby stuff. One less thing to administer and monitor.
  • Lower memory footprint if you use Enterprise Ruby (also made by Phusion). It will share resources between the Ruby processes.

Luckily, Andy Kim already played guinea pig and tried it out to make sure it worked. Many thanks to him for that (and for the whole PotionStore thing to begin with, of course).

While the setup was fairly simple, I ran into a couple odd issues. For one, the Enterprise Ruby installer seemed to screw up the permissions of some of its files. All of its .so files and a directory or Ruby file here and there were set to be only readable by the owner. Make sure to check this before deploying. Note also that it installs as a totally separate Ruby installation so run its version of gem to make sure your Ruby packages match what you had on “regular” Ruby. For those of you who are running PotionStore, make sure to do a rake rails:update otherwise it’ll bomb and log a message telling you to do so.

Unfortunately, I didn’t record the memory usage beforehand so I don’t know the exact gain. Based on my recollection, it does seem like I have maybe 20M or so more than I did before (for two Ruby processes). One odd thing I’ve noticed in my graphs is that my interrupts and context switches plummeted immediately. Not sure why that is but it seems like a good thing to me.

While this doesn’t fix Rails’ thread-safety problem (which is what forces a separate process per request), it does at least make the deployment much, much easier and, with the memory savings, a bit more scalable since you take less of a memory hit with each extra Ruby process. Especially for those of you that have not deployed yet, this will save you a bit of a headache in configuration (no proxy and mongrel setup). It’s only been up for a couple days so it may be too early to tell but so far it’s been running fine.

Comment » | Ruby on Rails, Software, System Administration, Web
