Category: Software


Positional Sound in User Interfaces

October 23rd, 2008 — 3:50pm

Video games are at the forefront of the kinds of rich interactions people can have with computers. In the past decade, there’s been a push for more and more immersive virtual environments, resulting in more advanced APIs and hardware to provide things such as super-fast 3D rendering. In recent years, OS X has leveraged these advances in the predominantly 2D world of user interfaces, often in brilliant ways as seen with QuartzGL, CoreAnimation and CoreImage.

In video games, it’s quite common to exploit stereo output or even better, surround sound, to provide positional audio cues. Just as graphics can simulate a 3D space, so can sounds be placed positionally in the same space. If you, super-genetically-modified-mutant-soldier, are running around on the virtual battlefield and there is some big-bad-alien-Nazi-demon-zombie dude shooting at you from the side, you will hear it coming from that direction and react accordingly. Directional audio cues can supplement visual cues or even supplant them if visual ones cannot be shown (i.e. something requiring attention outside your field of view).

On OS X, sound is used rather sparingly in the interface, which is probably a good thing. But for those cases where its use is warranted, why not take advantage of the technology available? Just as animation can be used to guide the user’s focus, why not sound? OS X does ship with OpenAL, which is to sound what OpenGL is to graphics, providing a way to render sounds in a 3D space.

I’ve put together a quick proof of concept app (download link near the end of the article). Move the window around the screen and click the button to make a sound. Based on the window’s position, the sound will appear to come from different directions, which for the most part means left/right, since most sound output systems aren’t designed to articulate position in the up/down direction. The program itself basically maps the window position to a point in the 3D sound space. Right now, it doesn’t really use the z-axis (the axis that goes into your screen) but conceivably you could do things like make the sound appear further away based on window ordering. Try using headphones if the effect is not as apparent using speakers.
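
To give a rough idea of the mapping, here’s a sketch of the approach in Objective-C. This is not the demo’s actual source (which isn’t released); it just illustrates the idea, assuming an OpenAL device and context are already set up and the source already has a buffer attached. The function name and the scaling factor are made up for illustration.

  #import <Cocoa/Cocoa.h>
  #import <OpenAL/al.h>

  // Sketch only: map a window's horizontal position to an OpenAL source position.
  // Assumes the screens form one big virtual screen and the soundstage is
  // centered on the primary display (the simplifying assumptions discussed below).
  static void PlayAlertForWindow(NSWindow *window, ALuint source)
  {
      // Compute the bounding box of all screens.
      NSRect totalBounds = NSZeroRect;
      for (NSScreen *screen in [NSScreen screens])
          totalBounds = NSUnionRect(totalBounds, [screen frame]);

      // Normalize the window's center to roughly [-1, 1], with 0 at the center
      // of the primary display (the one with the menubar).
      NSScreen *primary = [[NSScreen screens] objectAtIndex:0];
      float normalizedX = (NSMidX([window frame]) - NSMidX([primary frame])) /
                          (NSWidth(totalBounds) / 2.0f);

      // The listener sits at the origin; the source is placed left/right along
      // the x-axis and pushed slightly into the screen so the panning is not
      // pinned hard to one channel.
      alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);
      alSource3f(source, AL_POSITION, normalizedX * 5.0f, 0.0f, -1.0f);
      alSourcePlay(source);
  }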

There is a significant technical issue, though. You can’t really know the actual physical dimensions and layout of a user’s screens, nor the position of the speakers relative to the screens. While you can get screen resolutions and the relative positions of the screens, these are mostly hints at the actual layout. In my demo program, it is assumed that the screens are relatively close to each other, forming one gigantic screen. It is also assumed that the speakers produce a soundstage roughly centered on the primary display (the one with the menubar). It assumes a model like this (the circle is the user and the thin slabs are the monitors, from a top-down view):

screen-setup1.png

In reality, it’s probably more likely the user would have a setup like the following:

screen-setup2.png

But who knows, it could possibly be something like this:

screen-setup3.png

The point here is that the effectiveness of this is dependent on the user’s setup. A particular idealized model would have to be chosen that hopefully works well enough for most people. While pinpoint accuracy is not really feasible, it probably isn’t required either. Human hearing is imprecise, otherwise ventriloquists would never be able to pick up a paycheck. Just an indication of left, right or center is probably enough for these purposes.

Where would this be useful? Well, this all came up yesterday when I received an IM (via Adium). I had my IM windows split up across two screens so I had to scan around a bit to find out which window had the new message. Though the window was on the screen to the left, the audio alert made me look at the main screen since the sound was centered straight ahead. It would be great to see an idea like this implemented in Adium and I’ve filed a feature request with them for their consideration. It’s ticket #11292 so you don’t go and submit a duplicate request.

It would be interesting to see more use of this in user interfaces out there. I don’t want to encourage people to add sounds to their apps if they weren’t already using them but for those that are, it’s something to consider. Overall, the effect is quite subtle but with some tweaking, it can be quite effective.

The link to download the demo program is below. Sorry, no source is provided this time. The code is a hacked together mess of stuff copied and pasted from an Apple example as I have never used OpenAL before. This can probably also be implemented in CoreAudio by adjusting the balance between the channels. If you are considering implementing something like this, email me and I’d be happy to discuss details as long as they don’t involve audio APIs since, well, I don’t know them particularly well.

Download PositionalAudioAlertTest.zip (Leopard only)

Thanks to Mike Ashe and Chris Liscio for advice on CoreAudio, which I ended up not needing: Daniel Jalkut suggested I use OpenAL instead, which made things easier.

5 comments » | Downloads, OS X, Software, User Interface

Passenger On Board

July 22nd, 2008 — 7:47pm

I just switched PotionStore to use Phusion Passenger. Also known as mod_rails, Passenger is an Apache module that lets you run Rails applications directly within Apache. Unlike other Apache plugins like mod_php, your application still runs in separate processes. Previously, I had been using Apache as a proxy to a mongrel cluster. On the surface, this doesn’t sound much different but Passenger does give you a couple of things:

  • It maintains the pool of Ruby processes for you. It can adjust the pool dynamically as needed, reclaiming memory when the site is not busy, for example. You don’t have to worry about setting up and maintaining a separate set of servers like you do with mongrel. It gets restarted with Apache and you can also trigger it to restart just the Ruby stuff. One less thing to administer and monitor.
  • Lower memory footprint if you use Enterprise Ruby (also made by Phusion). It will share resources between the Ruby processes.

Luckily, Andy Kim already played guinea pig and tried it out to make sure it worked. Many thanks to him for that (and for the whole PotionStore thing to begin with, of course).

While the setup was fairly simple, I ran into a couple of odd issues. For one, the Enterprise Ruby installer seemed to screw up the permissions of some of its files. All of its .so files and a directory or Ruby file here and there were set to be readable only by the owner. Make sure to check this before deploying. Note also that it installs as a totally separate Ruby installation, so run its version of gem to make sure your Ruby packages match what you had on “regular” Ruby. For those of you who are running PotionStore, make sure to do a rake rails:update, otherwise it’ll bomb and log a message telling you to do so.

Unfortunately, I didn’t record the memory usage beforehand so I don’t know the exact gain. Based on my recollection, it does seem like I have maybe 20M or so more than I did before (for two Ruby processes). One odd thing I’ve noticed in my graphs is that my interrupts and context switches plummeted immediately. Not sure why that is but it seems like a good thing to me.

While this doesn’t solve the problem of Rails’ lack of thread-safety (which results in a separate process per request), it does at least make deployment much, much easier and, with the memory savings, a bit more scalable since you take less of a memory hit with each extra Ruby process. Especially for those of you who have not deployed yet, this will save you a bit of a headache in configuration (no proxy and mongrel setup). It’s only been up for a couple of days so it may be too early to tell but so far it’s been running fine.

Comment » | Ruby on Rails, Software, System Administration, Web

New Tool On The Block: The LLVM/Clang Static Analyzer

July 7th, 2008 — 10:30pm

Over the weekend, Gus Mueller turned me on to the LLVM/Clang static analyzer. And just in time, too, as I was polishing up my 2.2 release (which went up earlier today).

It’s an offshoot of the LLVM and Clang projects (read the respective pages on what they are if you don’t know already). The static analyzer analyzes your code and looks for problems, focusing mainly on memory allocation patterns, including Objective-C/Cocoa (retain/release/autorelease) and CF (CFRetain/CFRelease) semantics.

Take this contrived example for instance:

  id foo()
  {
      NSArray       *array = [[NSArray alloc] init];

      if ([array count] > 0)
      {
        return nil;
      }
      return [array autorelease];
  }

The example above will get you a report like this (it generates html):

checker1.png

Drilling down you get this (still in html):


checker2.png

Here you can see it pointing out [1] where the object was allocated, [2] the branch it took and [3] the point where you leaked it. Pretty neat. It tries to follow every possible branch, finding paths where you may have leaked an object. It also finds what it calls “dead stores” (when you assign a value to a variable but never use it again) and missing dealloc methods.
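
In case the term is unfamiliar, here is the sort of dead store it flags, in a contrived example of my own (the method names are made up):

  - (NSString *)displayName
  {
      // Dead store: the value assigned here is never read because it gets
      // overwritten on the next line before anything uses it.
      NSString *name = [self fullName];
      name = [self abbreviatedName];
      return name;
  }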

As the project page says, it is very early in development. You’ll find that it does turn up a lot of false positives, especially with the missing deallocs. False positives for memory leaks seem to occur when you release something in a different scope than where you created it. For instance, I have a chunk of Apple code that wraps CFRelease() with its own function that checks for NULL first. The checker complained about this every time. Nonetheless, it did turn up some real leaks for me.
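
The pattern that trips it up looks something like this (reconstructed from memory rather than copied from the actual Apple code, and the function names are mine):

  #import <CoreFoundation/CoreFoundation.h>

  // A NULL-safe wrapper around CFRelease(). Because the release happens inside
  // this function, the analyzer doesn't see that the caller's reference really
  // was released, so it flags a leak at every call site.
  static void SafeCFRelease(CFTypeRef object)
  {
      if (object != NULL)
          CFRelease(object);
  }

  static void Example(void)
  {
      CFStringRef string = CFStringCreateWithCString(NULL, "hello", kCFStringEncodingUTF8);
      // ... use string ...
      SafeCFRelease(string);    // flagged as a leak of 'string' even though it isn't
  }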

Aside from reducing the number of false positives, I’d also like to see the entries grouped by source file (it’s annoying jumping around between files) as well as some way to bring up the original source file by clicking on its name in the source code view. You will also see multiple entries for the same leak when the code reaches it via multiple paths, which can be annoying.

In any case, I recommend downloading it and giving it a try. I’m not sure how thorough it is (i.e. whether it can supplant running your program through MallocDebug/Instruments/leaks) but it makes a great additional tool to add to your arsenal. Chances are it will look at some code path that you don’t test. Oh, and a couple tips:

  • Make sure you do a clean build on your project first. The checker only runs on files that would normally be compiled (it sits in as your compiler). If your project is already built, then no files will be compiled/analyzed.
  • Use the -V option, which will pop open a browser with the analysis page when done. Normally, it sticks the files somewhere under /tmp but only shows the actual path when you start the run. Needless to say, that bit of text scrolls off pretty quickly.
  • While the tool does come up with false positives, you’ll find that sometimes it finds something subtle that you may blow off as a false positive on first glance. Make sure you understand what it is flagging, even if it ends up being wrong.

I haven’t used it with a garbage collected program so I don’t know if it uses different techniques in such a case or is just plain unnecessary. Maybe the dead store detection becomes more important. Reports from anyone using this with GC are welcome.

3 comments » | Debugging, Programming, Software, Tools

Hazel 2.2

July 7th, 2008 — 3:04pm

Yes, it’s finally out. Hazel 2.2 is what I consider the “power user” release. It adds advanced features such as pattern matching and custom tokens (basically, a more accessible form of regular expression matching and substitution, for you programmer types out there), inline scripts and ways for AppleScripts to control the rule flow. There are a bunch of smaller things tucked away in there, some of them subtle in their own ways. Make sure to read the release notes.

Thanks to all the beta testers who found all the bugs there were to find (you guys did find them all, right?) and all the users who have sent in the great comments that motivate me to keep working on this thing. Download it and give it a spin.

As for the future, I’m thinking of 3.0 though I’m not sure what will be in it yet or when it will happen. I also have been mulling over other projects so we’ll see. In the meantime, I look forward to your comments.

Comment » | Hazel, Noodlesoft, Software

On Software Bundles

June 17th, 2008 — 6:01pm

It occurred to me that it’s been about a month since I did the MacUpdate bundle in April. Now that things have settled down I figured I’d share my experiences with it.

Now, there has been some controversy concerning the bundles, boiling down to whether it is a good deal for the developers. After all this, I can’t say that the issue has been fully resolved in my mind but I’ll try to at least clarify the real issues at stake. I want this to be useful to other devs who are considering participating in bundle promotions without resorting to any demagoguery.

Why Did I Do it?

Hazel was included in the bundle with 9 other apps, the main one being Parallels. This was the anchor app that would be the big draw for most users. For me, this was a key thing, not so much because of the potential sales as the association with some larger-name apps. Some people may have considered the bundle a bit boring as a lot of the apps were a bit “mainstream.” But then again, we’re talking apps that you actually see on the shelf in a store. It’s a whole different level of distribution and exposure than selling online. What many people online don’t realize is that a vast segment of the potential user base does not scour the net for software reviews. They’ve never heard of any of your favorite Mac sites. They don’t know and don’t care about the latest Mac scandals, memes and fads. They go buy their software at an Apple store, or maybe on Amazon. Our only chance to get noticed by these users is in the oddball case when they click on that “Mac OS X Software” item in the Apple menu. The point is that for us smaller ISVs without a physical boxed product, it’s an untapped market. So I felt that being associated with a couple of apps from that “realm” was an opportunity.

MacUpdate did run full-page spreads in MacWorld and MacLife, not something I would have been able to do myself. There is something nice about seeing your icon next to some bigger names in print. I don’t know how effective they were. The problem with print, of course, is that it’s hard to track. But again, I felt that the association would be helpful.

Now, I’m not going to give specific details concerning the money but I will say that it was a percentage-type deal. The more copies sold, the more money I got. Contrast this with a flat amount that does not vary no matter how well it sells. That said, the amount I got per bundle is way below what my product costs. So why do it? Some reasons:

  1. Volume. They will sell over an order of magnitude more copies than you will during that time period. The hope is that the volume makes up for the huge discount. The hope is more money in total.
  2. Not everyone is buying the bundle for your app. Especially if you are one of the smaller fish, you are probably piggybacking on the anchor app. One way to look at it is that you are getting part of someone else’s sale.
  3. Exposure. Yes, that nebulous thing that is bandied about. It’s hard to make any concrete claims on this one so I think it’s best to not base your decision on this factor alone. You hope your product gets more recognition in the long run but it’s hard to measure that.
  4. Userbase. By building up a large userbase, you have more people to get upgrade revenue from when the next big version comes out. Not having done a paid upgrade yet, I don’t know if this is all that it’s cracked up to be. Anyone with firsthand experience with this is invited to comment.

And of course, what’s bad about doing it?

  1. Cheapening your software. The notion here is that if you are selling your software at rock-bottom prices, people get the perception that it is not worth much. I feel this is valid but I also don’t think doing it every now and then is a big problem. I think the issue is if your product is sold at a discount often enough that people will start to expect it, waiting for the next promo to buy it. Like exposure, it is hard to quantify and so I have a hard time basing any argument on this alone.
  2. Support. Yes, taking on thousands of new customers in a short amount of time will probably result in an increased support load.
  3. Allowing promoters to exploit you. This is more of an ego/sense of justice issue. Fact is, these bundles are pulling in a lot of money. It is unclear whether the developers are getting a big enough slice of that pie. No one likes to be ripped off.

What Happened?

How did it go? Well, the bundle sold 15K copies. From what I can tell, that seems ok. Part of me expected more considering that a big name app (Parallels) was anchoring the thing. Also, it seemed like the unlocking thresholds backfired resulting in lags in the sales rate at certain points in the promotion. Personally, I’m not big on gimmicks but I leave the promotion to the promoters. I did sense a general fatigue amongst consumers with these bundles, though.

As far as the money issue goes, I got a decent chunk. If you look at it solely on a per-copy basis, then yes, it sucked. But the way I look at it is this: My revenue per month is at some amount A. I sell in a burst and rake in something like 3-4x A. Afterwards, revenue goes back to A. Now the last bit is important as it implies, to me, that I didn’t cannibalize (i.e. sell at a discount to a lot of people who would have purchased it anyway at full price) or otherwise negatively impact sales.

Now, while my sales were fine after the promotion, they didn’t shoot up (as some people may expect). This would seem to indicate the lack of effectiveness of the “exposure” element. It’s only been a month so we’ll see how it goes but at the moment, it’s like it never happened.

The question now is what was lost. I’ll start with the more tangible cost, which is support. I had to deal with a support nightmare because the integration of my licensing with MacUpdate was less than ideal, which resulted in a ton of emails about getting the program registered. Now, I’ll admit my licensing scheme was the odd man out but I don’t get these issues with my own store. So, in this instance, the support cost to me was a bit high. That said, it was very annoying because I felt it could have been easily prevented or rectified. Outside of the registration snafu, support was not too bad: a moderate burst during the promotion and that’s it.

The other thing to consider is opportunity cost. This is the cannibalization I referred to earlier. Were you losing money to people that would have bought at full price? It’s not a question that can be definitively answered but as I mentioned, I use sales after the promotion as a gauge for this type of thing. If there’s a sales dip right afterwards, that implies to me that a lot of people that were going to buy your product anyways bought earlier to get the deal. In my case, sales were not negatively affected so I do not consider it a significant issue in this case.

Was It Worth It and Will I Do It Again?

I’d say it was worth it, but not in the way people would expect. It was worth it to me in that I got a nice check afterwards without having my sales adversely affected. As for exposure, there may be better ways to get people to know about your app without practically giving it to them. I don’t have a paid upgrade planned out yet so that was not even a consideration for me. For me to do it again, I think I would have to be offered a good deal, percentage-wise. My sense is that these promotions are more effective in the short term than long term.

Now, that doesn’t mean that it is totally ineffective in creating exposure for your app. I have seen signs that people have been turned on to Hazel who had never heard of it before. The issue here is that it has not translated into increased sales (at least so far). Maybe it will help in the long term but since it’s near impossible to quantify, it’s not a prime motivation for me when doing these types of promotions.

Did I Get Ripped Off?

To address the big controversy, no, I do not think I was ripped off, though I could’ve done better. Let me start off by saying that I feel that promoters do provide a valuable service. I’ve seen some developers band together with their own bundles with less than stellar results. Fact is, not just anyone can throw their stuff up and expect to sell any large volume. There is skill and work involved in getting people’s attention. Doing it yourself is, well, just doing it yourself. If you already have the pull to get tons of people to pay attention to you, then you probably don’t need to do special promotions for your apps.

Now, I haven’t seen the books for any of these promotions plus the deals probably vary greatly depending on who does them so it may be unfair to lump them all together. But it seems to me that the promoters are getting quite a large chunk and that they can afford to cut the devs a better deal. I’ve seen arguments along the lines of “How can you criticize them? They are getting sales for developers/exposure for their software/donating money to charity/yadda yadda.” This is a logical fallacy. Yes, maybe they are doing some good things, but that doesn’t mean it justifies the bad things. There is nothing that has indicated to me that they can’t do all the good things they already do while providing a more equitable split. Again, I haven’t seen the books but my sense is that there is some leeway there.

I want to make it clear, though, that I have no complaints about my deal with MacUpdate. In the end, it was my deal to negotiate. For me, this was an experiment and knowing what I do now will help in assessing future promotions.

And of course, this is just my experience and your mileage may vary. Especially if you are just starting out, it’s possible the exposure element will be more helpful. I’m not sure how useful the exposure is for the more well-known apps but I suspect that they, having more leverage, negotiated higher percentages. In the end, you have to assess whether it fits with where you and your app are, marketwise.

32 comments » | Hazel, Noodlesoft, Software

MacSanta is back in town

December 7th, 2007 — 1:05am


It’s that time of year again. MacSanta is here, providing discounts on all sorts of Mac software for the month of December. I couldn’t get my act together last year but this year Noodlesoft is participating.

Today, Hazel is one of the apps being featured. That means you can get 20% off Hazel for today and 10% off for the rest of the month. Just saunter on over to the MacSanta site for the coupon codes to get your discount on Hazel as well as some other great software.

Comment » | Hazel, Noodlesoft, Software

On Leopard compatibility

October 22nd, 2007 — 11:31am

Hazel 2.1 has just been released. One of the main focuses of this release was Leopard compatibility but what does this really mean?

In this case, it means that, for the most part, Hazel will work on Leopard as it did on Tiger. As other devs have pointed out (1, 2, 3), we do not get the final version of Leopard any sooner than you do. Actually, unless we go into a store and pay for a copy on launch day, we will probably get it later.

The implications of this are that there could be changes between the last prerelease and the final version that could break things and we won’t know until launch day. It’s a gamble but I’d rather have something usable in your hands the minute you upgrade to Leopard. This version addresses the known Leopard issues to date and should be ready to help organize your Stacks come October 26th.

As for the longer term roadmap with Hazel on Leopard: Hazel is not providing any special Leopard-only functionality currently. When will Hazel start using exclusive Leopard features or go fully Leopard-only? It’s hard to say. Leopard does provide some functionality that Hazel can take advantage of. But until I feel comfortable that a good number of my users have upgraded, I’ll try and support both Tiger and Leopard.

As a user, you do have the ability to influence this. When checking for updates, you have the option of sending anonymous data about your system. One of the things sent is your OS version (you can see all the data sent if you click on the “More Info…” button). Using this data, I can get a sense of Leopard adoption. If you want to be properly represented, then check the “Include anonymous profile” box in the update settings. I keep the data to myself and won’t do any bad things with it. Your participation will help guide Hazel’s future development so, if you’re not doing it already, please consider casting your vote in this manner.

So, in the end, I just want to clarify that there’s a bit of a juggling game here. I’ve tried to make sure that everything works as smoothly on Leopard as it does on Tiger. If it turns out that something changed in the final release or if I just flat out missed something, I’ll fix it. Leopard compatibility is not so much a state as it is a commitment.

2 comments » | Hazel, Noodlesoft, OS X, Software

Moving to MarsEdit

September 5th, 2007 — 11:22am

Until now, I’ve always written up my blog posts in a combination of TextEdit for the initial draft and the WordPress web UI for subsequent drafts (mostly to get the formatting right) and final posting. Needless to say, it’s been painful. For a web app, WP isn’t so bad but, in the grand scheme of things, it’s lacking. I know this will probably draw the ire of all of you who think web apps are going to take over the desktop but seriously, it’s not going to happen with the current state of the art (unless users are willing to sacrifice a good bit of usability).

I am now switching over to MarsEdit. Resizable editing area. No html tag buttons whose key equivalents conflict with my use of emacs key bindings. No waiting 10 seconds to preview or save because of a laggy internet or server. The markup macros are editable (so I don’t keep having to type in target="_blank" on every link). The UI is quick and responsive. All the niceties of a native desktop app. Yeah, I’m sure I’ll bump into problems but at least a desktop app can fix most of those. With the web app, there are fundamental problems with the paradigm that make it clumsy and slow.

I know you’re thinking, “well, if the web app sucked so bad, why did you use it?” Simply enough, momentum, or lack thereof. Also, there is my ambivalence towards blogs. I resisted investing in any blogging tools as I didn’t want to admit that I was taking this seriously. I wanted to keep it painful so I would have a reason to hate it. Though now I’m still a bit of a curmudgeon when it comes to blogging, at least I’m willing to concede it’s not worth doing things the hard way.

In any case, MarsEdit 2 is out now. If the thought of writing up a blog post makes you wince, then you should check it out. Who knows, instead of hating it you might end up liking, er, tolerating it.

Oh, and yes, the same Daniel Jalkut who convinced me to start this blog is also responsible for MarsEdit 2. So once again, this is all his fault.

2 comments » | OS X, Software

Hazel on ScreenCastsOnline

August 27th, 2007 — 12:17pm

Noodlesoft is sponsoring the latest episode of ScreenCastsOnline. Don McAllister does a great job of guiding you through Hazel’s features. Nothing gets the point across like seeing it in action.

You can check it all out here.

Oh, and there’s a discount code in there, if you need any more enticement to watch it.

Comment » | Hazel, Noodlesoft, Software

Numbers and the Next Big Thing

August 15th, 2007 — 1:42pm

I’ve been waiting around for Numbers. Well, not Numbers specifically but for Apple to do a spreadsheet. Now that it’s out, I have to say that I’m disappointed. It’s not about features but about the base paradigm. I wanted Apple to revamp how spreadsheets are perceived and conceived. It may be a bit much to expect from Apple except that the new paradigm was already created some 20 years ago, and not only that, Steve Jobs had a hand in it.

Instead of explaining it all here, I suggest you read about Lotus Improv. Here’s a great article describing the history.

In short, Lotus came out with what is now known as the multidimensional spreadsheet. It was one of the first apps on the NeXT platform. It was also a revolutionary new way of doing spreadsheets.

It’s hard to really get a sense of how it works by reading about it. Quantrix has some great Flash presentations. I advise viewing those before reading on.

The main benefit of a multidimensional spreadsheet is that it actually knows about your model. When constructing a multidimensional spreadsheet, you are not constructing a visual structure so much as a semantic one. Those headers aren’t just for your benefit. The row and column headers are, in effect, axes in your multidimensional model. But you aren’t limited to two dimensions. You can define as many dimensions as you want and dynamically rearrange the axes as you see fit. The order of the axes (which axis is the column and which is the row) is just a part of the view and not the model itself. This also extends to charts. They are just graphical views of the same model and if your model changes, the charts can automatically update as well since they are based on the same semantic structure (i.e. the charts are not just one-offs).

The result of this is that the program makes many things natural and intuitive (it only sounds complicated). For instance, “pivot tables”. In the multidimensional model, it’s a natural extension of the paradigm (there’s no special marketing term for it) which makes pivot tables in traditional spreadsheets look like a hack. Natural language formulas also come, well, naturally. The generic headers and cell designations (A1, C5) are gone. You define the items in the headers so a cell is referenced as “Sales:1990” which makes tons more sense. Also, formulas are more based on the structure of the model and not on individual cells. This allows the formulas to be separated out so you can see (and edit) them all in one place (the formulas don’t go in the cells). Because it is a multidimensional model, extending any one dimension (i.e. adding rows or columns) will bring along any formulas with it. Again, this is hard to visualize if you haven’t seen it before so check out the Flash tours linked above.
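
If it helps to see the idea in code, here’s a toy sketch (my own illustration, nothing to do with how Improv or Quantrix are actually implemented). Values are keyed by one label per named dimension rather than by grid coordinates, and the formula is defined against the structure rather than against individual cells:

  #import <Foundation/Foundation.h>

  int main(void)
  {
      NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

      // Two dimensions: Item and Year. A "cell" is addressed by a set of labels,
      // one per dimension; which dimension runs down the rows and which across
      // the columns is purely a view decision.
      NSMutableDictionary *model = [NSMutableDictionary dictionary];
      [model setObject:[NSNumber numberWithDouble:120.0]
                forKey:[NSSet setWithObjects:@"Sales", @"1990", nil]];
      [model setObject:[NSNumber numberWithDouble:80.0]
                forKey:[NSSet setWithObjects:@"Costs", @"1990", nil]];

      // A formula stated against the model: Profit = Sales - Costs for every Year.
      // Adding another year to the Year dimension brings the formula along with it.
      NSArray *years = [NSArray arrayWithObjects:@"1990", nil];
      for (NSString *year in years)
      {
          double sales = [[model objectForKey:[NSSet setWithObjects:@"Sales", year, nil]] doubleValue];
          double costs = [[model objectForKey:[NSSet setWithObjects:@"Costs", year, nil]] doubleValue];
          [model setObject:[NSNumber numberWithDouble:sales - costs]
                    forKey:[NSSet setWithObjects:@"Profit", year, nil]];
      }

      // Referencing a cell reads like "Profit:1990" rather than C5.
      NSLog(@"Profit:1990 = %@", [model objectForKey:[NSSet setWithObjects:@"Profit", @"1990", nil]]);

      [pool drain];
      return 0;
  }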

So, where are the multidimensional spreadsheets now? Lotus did port Improv to Windows but Improv on both platforms ended up being abandoned. To fill the void, Lighthouse (the company I worked at) created a clone, Quantrix (which I worked on). As I’ve mentioned before, the Lighthouse apps were mothballed by Sun.

Since then, Pete Murray (one of the original authors of Quantrix) wrote it all over again, from scratch in Java, and has released it with his new company. He even got rights to the Quantrix name. As linked above, you can check it out at Quantrix (and thanks to Pete for allowing me to link to his demo presentations). Note that it is not priced for the casual user, being oriented more towards the enterprise customer but they do have educational pricing.

There’s also Flexisheet if you want something free, open source and/or native, though it does not seem to have been worked on in years.

• • •

iWork’s Numbers is fundamentally a 2D spreadsheet. It does some trickery with the headers to allow for some level of natural language formulas. It has some things here and there to simulate some of the aspects of a multidimensional spreadsheet but it’s still a traditional sheet underneath.

One subtle difference between the 2D and multidimensional models is that in the latter, the data model is expected to be dense. What this means is that you don’t really have unused cells; all cells are intended to have meaning in your model. It’s not a freeform grid but a packed model of data. For people used to sticking all sorts of random non-computational stuff into spreadsheets, this can be hard to adjust to. Basically, people are using spreadsheets not so much as computational tables but as a big piece of graph paper.

Numbers shifts this around a bit by making the tables part of a larger freeform canvas. This is a big improvement over other traditional spreadsheets as I’ve always believed cells are for numbers. That clip art or paragraph of text you stuck in there is not part of the model you are creating. This way, the spreadsheet is used as it was intended, and anything you attach to it, you put together with it and not in it. All in all, the separation of table and canvas is a welcome change.

Why wasn’t Numbers done as a multidimensional spreadsheet? Several factors come into play here. The main one is that multidimensional spreadsheets are quite different from traditional spreadsheets. If you’re an Excel user then you’d have to unlearn a lot of how you conceive of spreadsheets. In essence, it’s a hard sell to anyone that uses a traditional spreadsheet. The only market where it seems to stick is the financial market, which is not a market Apple is concerned with. It’s too bad, really, since I’ve always felt that the multidimensional model is actually more intuitive for the user who has never touched a spreadsheet. I felt a consumer-level multidimensional spreadsheet would have been the innovation the spreadsheet market needed.

Maybe in the end it was too much to expect of Apple. With innovation comes risk and it’s hard to bet on something that has already failed in the market once. Nonetheless, the innovation is there. The hard part is getting people to use it.

27 comments » | OS X, Software
