N900 vs. Nexus One vs. iPhone

Posted in Mobile Platform, N900 on April 16th, 2010 at 21:38:48

(The following opinions are my own, and not representative of my employer or anyone else.)

A friend asked me via email:

I noticed you’ve had your N900 for a while. I am thinking about upgrading from my iPhone 3G (it’s really slow — lots of lagging on UI) to a Nexus One or a N900. Have you had a chance to compare the two at all? Where did you get your N900? Any “killer apps” or major problems with it?

I don’t know if ‘5 days’ is ‘a while’ yet. 🙂 As for how I got it… well, perhaps you missed my previous post.

I think that upgrading from an iPhone — if you generally like the way it works, and are just upset about the UI speed/performance — will be a disappointment. I have an iPod Touch and the N900, and when I want to run an app real quick, the iPhone will win, hands down. The N900 has far more functionality to be excited about, but in general, there are still aspects of the iPhone that win. Basically: if you didn’t jailbreak your iPhone, you’re not looking for the kind of thing the N900 is the solution to. (I didn’t jailbreak my iPod, but it’s a media player, not a communications device.)

I don’t have experience with the Nexus One. The G1 users in the office have commented repeatedly on how ‘fast’ the N900 response is by comparison: my understanding is that the Nexus One is a significant hardware step up from the G1, so that experience may be irrelevant. (G1 users also have said “If they just came out with a ‘G2’ that just had double the CPU + double the memory, I’d buy it in a heartbeat.”)

Before the recent events, I was considering the N900 or the G1. After playing with a G1 for a while, I realized that it — sort of like the iPhone, though to a lesser extent — is also not a general purpose computer. It’s closer, because it’s more open, but in the end, it’s designed as a ‘platform’, and that’s obvious in many aspects of how it works. The N900 is different: the Maemo OS was always developed for Internet Tablets, and that shows in many aspects of the design. (This is also why it makes a somewhat sub-par phone; although the OS has grown well into many aspects of the device, the ‘phone’ app is recent enough that it hasn’t had time to mature into that role.)

Overall: If you’re the type of person who wants to run a shell script, and display the results of a shell script on your phone’s desktop, the N900 may well be for you. (Really.) If you want a platform, but for something that’s designed originally as a phone, and with a much broader ecosystem and community — but more limitations ‘out of the box’ — the Nexus One may be for you, though that’s mostly hearsay. (I would consult a G1/Nexus One user for anything resembling a serious opinion here.) And if you want something pretty that isn’t going to require you to pop open a terminal to fix it when something breaks — the Apple way may be the right way for you.

Some cool aspects of the N900: two-way video calling with Google Talk (but only when initiated from a computer, sadly). Two-way audio chat over Jabber. Built-in always-on IM clients. Exchange mail push support built in. (Exchange support in general is excellent; this includes both ‘real’ Exchange and clones like Zimbra.) ‘apt-get install openssh’. Doom on the phone. A high-res screen, great for video playback.

Some things it doesn’t do well: making it trivial to open the actual ‘phone’ part (a hardware button would be great). The lack of a Facebook app is disappointing; the iPhone one was so good that I was spoiled by it. (I’m not a huge Facebook user, so this isn’t a deal breaker by any stretch.) The fact that there was not, until recently, a well-supported way of distributing non-open applications means that there isn’t a lot of non-open application development — so fewer professionally done games, free or pay, than on a platform like the iPhone (and I assume fewer than on Android as well). Ovi Maps does not compare favorably to Google Maps in many cases, and running apps via the browser just doesn’t feel right. (This is a start, but not really ideal.) Battery life is poor, but that’s probably my own fault. 🙂

As a final point of comparison, the app communities for the other devices don’t even compare to Apple’s in quantity. However, the pool of decent, usable, free apps does seem somewhat larger on the N900 than on Apple, in my experience. Finding something free on iTunes that’s worth downloading is hard — but the Maemo garage is full of fun, useful apps, and they’re all free.

I like the N900. It’s certainly not for everyone, but for me, I like having my communications device being a general purpose computer (so long as the phone part keeps working). There’s plenty of room to improve, but having a very open ecosystem reassures me that I may be able to contribute some of that, and overall, it fits the bill for being the most awesome thing to carry around in my pocket.

MetaCarta Acquired by Nokia

Posted in default on April 9th, 2010 at 09:37:05

As of April 9th, MetaCarta has been acquired by Nokia, and I am now an employee of Nokia working on local search in the Ovi services. (Woohoo!)

Enabling boto logging

Posted in default on March 12th, 2010 at 09:18:11

When using the Python ‘boto’ library for accessing Amazon Web Services, to enable logging to a file at the ‘debug’ level, simply use the logging module’s configuration:

import logging
logging.basicConfig(filename="boto.log", level=logging.DEBUG)

Place these lines near the top of your script, and logging will take place to a file in your current directory called “boto.log”.
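As a fuller illustration, here is a minimal sketch of a complete script with that configuration in place; the S3 bucket listing is just an example, and it assumes your AWS credentials are already set up for boto (for example, in a ~/.boto file):

import logging
logging.basicConfig(filename="boto.log", level=logging.DEBUG)

import boto

# Every boto call made after basicConfig() has its debug output
# (request and response details) written to boto.log.
conn = boto.connect_s3()
for bucket in conn.get_all_buckets():
    print(bucket.name)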

I’m sure that this is obvious for most people who use the Python logging module, but this is new code to me, and it took me a fair bit of looking to find out how to enable logging; hopefully other people find it more easily now.

How KML Succeeds and Fails as a Web Format

Posted in default on February 1st, 2010 at 10:46:28

KML is linked. It is self-descriptive, and can rely entirely on following links to obtain more information, whether that is styles or additional data.

However, the most common way of packaging KML is as KMZ — which is sort of like packaging an HTML page inside a zip file with all of its component parts. When this is done, web-based tools — like the Javascript support in browsers — lose all access to the data other than through a server-side proxy (and even that isn’t trivial to achieve). Styling information and related parts are no longer stored as separate resources on the web. The information in the KML has suddenly been locked up in just another application-specific format.

If this were uncommon, it wouldn’t be such a shame; it’s certainly possible to distribute data like this for the cases where it is necessary, such as offline use. However, this is not a limited situation — in fact, more than 80% of the KML made available on the web is primarily distributed as KMZ. This packaging of KML leaves much to be desired, and limits the use of such data in web-based tools.

The web already has ways to compress data — gzip-based compression is common on many web servers (a tradeoff of CPU time for bandwidth), and works fine in all KML clients I’m aware of (including Google Earth and Google Maps). This lets your data exist on the web of resources and documents, rather than in a zipped-up bundle.
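If your web server doesn’t already handle this for you, it is easy enough to do at the application level. Here is a minimal sketch (using a hypothetical doc.kml file) of a small WSGI application that serves plain KML, gzip-compressed only for clients that advertise support for it:

import gzip
from io import BytesIO
from wsgiref.simple_server import make_server

def kml_app(environ, start_response):
    # Serve a plain, linkable KML resource; compression happens at the
    # HTTP layer instead of by wrapping the document in a KMZ archive.
    data = open("doc.kml", "rb").read()
    headers = [("Content-Type", "application/vnd.google-earth.kml+xml")]
    if "gzip" in environ.get("HTTP_ACCEPT_ENCODING", ""):
        buf = BytesIO()
        gz = gzip.GzipFile(fileobj=buf, mode="wb")
        gz.write(data)
        gz.close()
        data = buf.getvalue()
        headers.append(("Content-Encoding", "gzip"))
    headers.append(("Content-Length", str(len(data))))
    start_response("200 OK", headers)
    return [data]

if __name__ == "__main__":
    make_server("", 8000, kml_app).serve_forever()

In practice, letting something like Apache’s mod_deflate do this is even simpler; the point is only that compression belongs at the HTTP layer, not in the packaging of the data.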

My interest in this matter should be obvious: I work with mapping on the web. Ideally, I work with tools that don’t require server-side code — every piece of server-side code you have to build is another heavy requirement placed on the users of any software. Browsers, as a common platform across which developers can code, are a worthwhile target, and trapping your data in KMZ hides it from browsers.

Free your KML! Publish on the Web! Don’t use KMZ!

Haiti Crisis Map Effort

Posted in default on January 29th, 2010 at 17:38:31

One of the most difficult things to do in a time of disaster is to quickly organize, marshal, and present resources. This applies across all aspects of disaster response — whether it be managing and distributing food, organizing volunteers, or setting up technical resources to assist with the relief effort.

The last is the field where I obviously have the most experience and ability to help, especially with regard to mapping. In past situations, I have put some of my map expertise to work in helping to create a resource for the disaster; the last significant case for me was in 2007, when I managed a ton of imagery made available as part of the response to the San Diego wildfires. (That map is still available, though it’s a bit worse for wear at this point.)

When the Haiti Crisis happened, I let it slide; I figured that someone else would step up to manage the data this time. After a while, though, I saw an increased number of imagery sources, and little coherent organization of the resources by a single party — one of the key things that made the 2007 fires map successful. As a result, and combined with some data that was being more narrowly published, I decided to set up a map. The first day I did any significant work on this was over the weekend of the 15th.

At first, the map wasn’t particularly great; it was primarily just a tool to view a bunch of satellite data that was being made available. It served mainly as a quality control check for OSM users who needed access to the data to complete the map of Haiti. Over time, more data became available — and more importantly, the OpenStreetMap data became a primary map for the area and the rescue efforts. Suddenly, the Haiti Crisis Map — then just the “UAV map” — was being used more and more.

As more and more data became available, the old map, using a simple OpenLayers layer switcher, became unwieldy; the layer switcher was never a user-friendly layout to begin with, and adding 20 layers with an unplanned mix of base and overlay layers leaves much to be desired.

By Wednesday, it was clear that the hodge-podge of available disk space attached to the hosting machine wasn’t going to cut it; though we started with just over 4TB available spread over 3 different drives, managing the data was becoming unwieldy at the same rate as the UI. Thankfully, by Wednesday the 20th, John Graham was able to get access to another Sun X4500 and set it up, giving us a clean 16TB drive to put new and old imagery on. (About 6 hours later, the NFS machine on which all of the current data was stored began to fail, most likely due to heavier than normal load; I spent most of that day moving data off the old drive and onto the new one.)

In addition to the data migration, at this time Aaron Racicot was able to step up and offer his help in building a GeoExt-based UI for the map. His efforts turned my hack into a reasonable UI for browsing the map, and it is really only because of that effort that I was able to keep going.

Over the weekend, at CrisisCamp, I was able to add additional features to support Ushahidi; the code was moved to GitHub as haitibrowser. In the middle of this week, the code was integrated into APAN, the All Partners Access Network, to support the efforts of SOUTHCOM in maintaining a high quality Common Operating Picture of events in the area.

Over the past two weeks, data has continued to pour in, in the hundreds of gigabytes a day. This is in part thanks to the generosity of the commercial imagery providers, in addition to the data made available by organizations like NOAA, companies like Google, and more. The extremely high quality imagery produced by RIT/ImageCat/WorldBank, for example, shows what is possible with the hard work of people with great hardware and a great team.

Using my knowledge — gleaned from my efforts in the earlier days of OpenAerialMap — I have been able to process this data and make it available as tiles and WMS to all consumers, primarily targeted towards OpenStreetMap editors. Over two dozen layers are available via what is now called the Haiti Crisis Map, each one adding a different view of the data. In addition, the map contains links to other files like KML collections from Ushahidi and Sahana, and as recently as yesterday, it gained the ability to create your own layers, which you can access in the map, provide as a link to someone else, or export as KML.

As part of the process of making the site more readily available, it is now available from haiticrisismap.org.

The most difficult part of this is attempting to manage the large sources of data. Thankfully, the resources that I have available have allowed me to be a bit lax in my conservation of disk space, CPU time, etc. Many thanks to CalIT, SDSU/SDSC, and Telascience for organizing these resources. In addition, a lot of the ‘hard work’ in the UI has been done by Aaron Racicot of Z-Pulley. I’ve done a lot of minor work, but the major UI layout and work has been done by him.

Thankfully, I’ve had the support of a lot of good people in this effort, and a lot of good tools to use: GDAL + OSSIM in the background for image processing, MapServer + TileCache for mosaicing and serving, OpenLayers + GeoExt for the UI, and OSM for the base map data have all made this effort possible.
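As a small taste of what the image processing step looks like, here is a minimal sketch (with a hypothetical filename) of the kind of per-image preparation done with the GDAL Python bindings before handing a raster to MapServer/TileCache: open the image and build overview levels, so that zoomed-out requests don’t have to read the full-resolution data every time.

from osgeo import gdal

# Hypothetical input file; in practice, this is whatever imagery has
# just arrived, after conversion to GeoTIFF.
ds = gdal.Open("haiti_scene.tif", gdal.GA_Update)
if ds is None:
    raise RuntimeError("could not open input image")

print("Size: %d x %d, %d band(s)" % (ds.RasterXSize, ds.RasterYSize, ds.RasterCount))
print("Projection: %s" % ds.GetProjectionRef())

# Build average-resampled overviews at a few power-of-two levels.
ds.BuildOverviews("AVERAGE", [2, 4, 8, 16, 32])
ds = None  # close the dataset, flushing the overviews to disk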

The haiticrisismap will continue to see improvements. It shows what a small, dedicated group of people can do with the right investment when properly motivated; I can honestly say that because of the resources made available through these efforts, we have saved lives. Whether it is through maps produced from OSM being loaded onto volunteer GPS systems, or the use of the data by Ushahidi volunteers to determine an accurate location on a map, this tool has been an effective aid to the relief effort in Haiti, and will continue to be for as long as possible in the coming days and weeks.

Are you more generative than consumptive in your field?

Posted in Locality and Space, OpenLayers, Social, Software on May 26th, 2009 at 10:57:47

Anselm just posted what appears to be a random thought on twitter:

Are you more generative than consumptive in your particular field? … Create more than you consume?

In open source, I often rephrase this question as “Are you a source, or a sink?”

There are many people in the community who contribute more than they consume: organizations, individuals, and so on. There are also many sinks in the community — since entropy is ever increasing, this seems a foregone conclusion — and one of the key things that determines whether an open source project succeeds or fails is the balance of sources and sinks.

I personally try very hard to be a source in all that I do, rather than a sink. One way I do this is by trying to follow up any question I ask — for example, on a mailing list, on an IRC channel, or what have you — with at least two answers of my own. This means that, for example, when I hopped into #django to ask about best practices for packaging apps, I stuck around and helped out two more people — one who was asking a question about PIL installation, and one about setting up foreign keys to different models.

Now, in the end, my answers were simple — no one with even a basic knowledge of Django would have had problems answering them. But by sticking around and answering them, I was able to make up to some extent for the time/energy that I consumed from someone more familiar with the project, by saving them from needing to answer as well.

It is often the case that users trying to get help will claim that, once they get it, they will ‘contribute back’ to the community by, for example, writing documentation. This never happens. Though there are exceptions to every rule, it is almost always the case that users who ask a question, prefacing it with “I will document this for other users”, never follow through on the latter half. The exceptions to this — or rather, the alternate cases — are cases where a user has already invested significant research, and likely already started writing documentation. Unless the process is started before the problem is solved, it is almost universally true — in my experience — that the user will act as a sink, taking the information from the source and disappearing with it.

I work very hard at supporting the open source projects that I contribute to. Though my involvement lately has been more hands-off — doing things like writing documentation instead of answering questions, or acting as a release manager instead of fixing bugs — I strive to keep the karmic balance of my work on the positive side. I believe that investing effort in creating value pays off in the long run: I’ve built a reputation for being helpful, which benefits me by increasing the likelihood of receiving help when I need it. I also work to maintain a high karmic balance on behalf of the organization I work for, especially since many others in the organization are less able to prioritize that balance.

These rules don’t apply solely to open source — I have the same karmic balance issues going on in my work inside of MetaCarta — and I maintain the same attitude there. Coming in with the idea that it is okay to be a sink sets a nasty precedent, and in the end, I think that everyone loses. Sinks — both in open source and in other karmic ventures — will eventually use up the karma they start with, and be left out to dry. More than one person has extended their information seeking, without contributing back, beyond the point where I am willing to continue to support their information entropy.

I joke sometimes about giving out “crschmidt karma points”. Though I don’t have an actual system in this regard, I do quite clearly delineate between constant sinks, regular sources, and the grey areas in between. I try to stay on the source side, and I encourage everyone else to do the same — even if it’s only by answering easy questions on the mailing list, or doing a bit more research on your own. Expecting other people to fix your problems, in open source or otherwise, is a false economy of help; in the end, it simply doesn’t work.

WSGI + Basic Auth

Posted in default on April 15th, 2009 at 10:17:05

I use the logged_in_or_basicauth snippet for a lot of my work, and had had some problems with it since I started using mod_wsgi in place of mod_python. Thanks to this post, I now know why my basic auth under mod_wsgi wasn’t working: the lack of WSGIPassAuthorization On in my Apache config.

Thanks to the author of that post! Also, thanks to Google, since without it, I’d never have found it.
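The symptom makes sense once you see how such a snippet works: HTTP Basic credentials arrive in the Authorization header, which Django exposes as request.META['HTTP_AUTHORIZATION'], and mod_wsgi strips that header unless WSGIPassAuthorization On is set. A minimal sketch of the idea (not the actual logged_in_or_basicauth snippet, just an illustrative view) looks something like this:

import base64

from django.contrib.auth import authenticate, login
from django.http import HttpResponse

def basic_auth_view(request):
    # Under mod_wsgi, this key is only present if WSGIPassAuthorization is On.
    header = request.META.get("HTTP_AUTHORIZATION", "")
    if header.startswith("Basic "):
        decoded = base64.b64decode(header[6:]).decode("utf-8")
        username, _, password = decoded.partition(":")
        user = authenticate(username=username, password=password)
        if user is not None:
            login(request, user)
            return HttpResponse("Hello, %s" % user.username)
    # No valid credentials: challenge the client.
    response = HttpResponse("Authorization required", status=401)
    response["WWW-Authenticate"] = 'Basic realm="restricted"'
    return response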

PowerPoint, in a sentence

Posted in default on April 6th, 2009 at 09:13:30

PowerPoint is a way to make gibberish look important.

— my 12 year old daughter, Alicia

MrSID SDK Improvements

Posted in default on March 10th, 2009 at 12:37:48

For a long time, I avoided MrSID like the plague. After trying to do *anything* useful with it, I finally gave up; the requirement for old versions of gcc, the lack of working 64-bit support, etc., really gave me a negative impression of the MrSID reading SDK. This was especially painful when working with OpenAerialMap, since MrSID has a practical lock on the market for ortho imagery data sources. (There are exceptions to this, but they’re usually JPEG2000 data, which was even worse to work with, in general, using the tools that I use.)

However, during a set of discussions yesterday, Frank said that MrSID building in GDAL had gotten much easier. I didn’t really believe him, but I had the DSDK handy for other reasons, and according to the build hints, it was supposed to be easy.

Thinking I was going to prove Frank wrong, I started building. I ran ./configure --with-mrsid=~/Downloads/Geo_DSDK-7.0.0.2167, confirmed MrSID ‘yes’ in the configure output, and then ran make.

Three minutes later, I had gdalinfo and gdal_translate built on my Mac with MrSID support.

My historical problems with MrSID are now completely irrelevant: the effort in the new SDK to support more platforms has clearly worked, and I can say that building MrSID support, even on the Mac, is trivial. A big thumbs up to the LizardTech folks for their effort in this regard — and to people like Frank and Michael for egging me on to learn this about the DSDK in the first place.

Code Sprint: Day 3

Posted in default on March 10th, 2009 at 09:24:38

Yesterday, I got to sit down and do some real performance testing with the MapServer folks. After rebuilding a local copy of the Boston Freemap on my laptop, I was able to share it with Paul, who ran it through Shark to find out where the performance killers are. The one thing we found was that this 5 year old MapServer ticket was negatively affecting performance on maps with many labels: the labelling code in MapServer right now, if you’re using outlines, draws each glyph 9 times in order to get a nice outline color. After determining this, we decided to work with the GD maintainers to add the support described in #1243 to GD, using Freetype’s internal stroking code to get the same behavior. (At the time, in Freetype *2.0.09*, there was a bug in this code; but we’re now on 2.3.8, so that bug has been long fixed. :)) This change will likely give a 20% speedup in drawing maps with many outlined labels, such as the Boston Freemap.

After this, we sat down with MrSID and GDAL/MapServer to figure out whether there were performance problems there. One thing we found was that MapServer’s drawing of one band at a time means there is a significant performance hit. In addition, some other performance enhancement techniques are being looked into at the GDAL level by Frank, thanks to the help of the LizardTech developers participating in the sprint. He’s currently looking at improving the way that GDAL reads from MrSID, and was already able to achieve a 25% speed increase by simply changing the internal GDAL buffer size used when converting from MrSID to GeoTIFF. More documentation and experimentation is still in order, but there are some possible optimizations there for users of the library to investigate.

We then had a great dinner at Jack Astor’s.

Thanks to our sponsors for today: Bart van den Eijnden from OSGIS.nl and Michael Gerlek from LizardTech — performance improvements in MapServer label drawing and in GDAL’s MrSID access are potentially big wins for many users of MapServer.