Archive for the 'default' Category

Brave New World: OpenLayers 3.0

Posted in default on July 21st, 2010 at 00:56:48

OpenLayers is approaching the ‘Brave New World’ of a major API breakage for the first time since OpenLayers 2.0 in October of 2006.

Things that have changed in OpenLayers since then:

  • Support for vector drawing
  • The existence of ‘spherical mercator’
  • Reprojection of objects
  • More than a dozen new layer types
  • The existence of TileCache, FeatureServer, and RESTful HTTP APIs for geodata
  • A tile-serving service from OpenStreetMap (also, for OpenStreetMap: 4 major API changes)
  • Over four dozen additional contributors to the source code
  • Over 2,000 Trac tickets
  • Ten 2.x releases

Overall, is it any shock that it’s time for a pretty major change?

In order to facilitate rapid development, we’ve shifted development of OpenLayers ‘3.0’ to GitHub; you can follow along (or fork your own) on the OpenLayers GitHub project. To be honest, git scares the crap out of me; every time I’ve used it, I have consistently lost data as a result of not understanding the tool and using the wrong command. However, I fully realize I am a fuddy duddy who needs to get over his problems at this point. 🙂

Looking forward to seeing the future of the world where OpenLayers is even more awesome than it already is!

MetaCarta Acquired by Nokia

Posted in default on April 9th, 2010 at 09:37:05

As of April 9th, MetaCarta has been acquired by Nokia, and I am now an employee of Nokia working on local search in the Ovi services. (Woohoo!)

Enabling boto logging

Posted in default on March 12th, 2010 at 09:18:11

When using the Python ‘boto’ library to access Amazon Web Services, you can enable logging to a file at the DEBUG level simply by using the standard logging module’s configuration:

import logging
logging.basicConfig(filename="boto.log", level=logging.DEBUG)

Place these lines near the top of your script, and logging output will be written to a file in your current directory called “boto.log”.
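For instance, here’s a minimal sketch of the effect (the S3 calls are just an illustration; boto.connect_s3() assumes your AWS credentials are available in the environment):

import logging
import boto

# Configure logging before any boto calls are made.
logging.basicConfig(filename="boto.log", level=logging.DEBUG)

conn = boto.connect_s3()            # connection setup is logged
buckets = conn.get_all_buckets()    # each request/response goes to boto.log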

I’m sure that this is obvious to most people who use the Python logging module, but this was new code to me, and it took me a fair bit of looking to find out how to enable logging; hopefully other people will find it more easily now.

How KML Succeeds and Fails as a Web Format

Posted in default on February 1st, 2010 at 10:46:28

KML is linked. It is self-descriptive, and can rely entirely on following links to obtain more information, whether that is styles or additional data.

However, the most common way of packaging KML is as KMZ — which is sort of like packaging an HTML page inside a zip file with all of its component parts. When this is done, web-based tools — like the Javascript support in browsers — lose all access to the data other than through a server-side proxy (and even that isn’t a trivial thing to achieve). Styling information and related parts are not stored as separate resources on the web. The KML has suddenly become just another application-specific format.

If this were uncommon, it wouldn’t be such a shame; it’s certainly reasonable to distribute data this way for cases where it is necessary, such as offline use. However, this is not a limited situation — in fact, more than 80% of the KML made available on the web is available primarily as KMZ. This packaging of KML leaves much to be desired, and limits the use of such data in web-based tools.

The web already has ways to compress data — gzip-based compression is common on many web servers (a tradeoff of CPU time for bandwidth), and works fine in all KML clients I’m aware of (including Google Earth and Google Maps). This lets your data exist on the web of resources and documents, rather than in a zipped up bundle.
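Setting this up is a configuration matter rather than a code change. With Apache’s mod_deflate, for example, a configuration like the following compresses raw .kml responses on the fly (a sketch assuming mod_deflate is loaded; the MIME type is KML’s registered one):

AddType application/vnd.google-earth.kml+xml .kml
AddOutputFilterByType DEFLATE application/vnd.google-earth.kml+xml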

My interest in this matter should be obvious: I work with mapping on the web. Ideally, I work with tools that don’t require server-side code — every piece of server-side code you have to build is another heavy requirement placed on the users of any software. Browsers, as a common platform across which developers can code, are a worthwhile target, and trapping your data in KMZ hides it from browsers.

Free your KML! Publish on the Web! Don’t use KMZ!

Haiti Crisis Map Effort

Posted in default on January 29th, 2010 at 17:38:31

One of the most difficult things to do in a time of disaster is to quickly organize, marshal, and present resources. This applies across all aspects of disaster response — whether it be managing and distributing food, organizing volunteers, or setting up technical resources to assist with the relief effort.

The last is the field where I obviously have the most experience and ability to help, especially with regard to mapping. In past situations, I have put some of my map expertise to work in helping to create a resource for the disaster; the last significant case for me was in 2007, when I managed a ton of imagery made available as part of the response to the San Diego wildfires. (That map is still available, though it’s a bit worse for wear at this point.)

When the Haiti crisis happened, I let it slide; I figured that someone else would step up to manage the data this time. After a while, though, I saw an increasing number of imagery sources, and little coherent organization of the resources by a single party — one of the key things that made the 2007 fires map successful. That, combined with some data that was being published only narrowly, led me to set up a map. The first day I did any significant work on it was over the weekend of the 15th.

At first, the map wasn’t particularly great; it was primarily just a tool to view a bunch of satellite data that was being made available, serving as a quality control check for OSM users who needed access to the data to complete the map of Haiti. Over time, more data became available — and more importantly, the OpenStreetMap data became a primary map for the area and the rescue efforts. Suddenly, the Haiti Crisis Map — then just the “UAV map” — was being used more and more.

As more and more data became available, the old map, using a simple OpenLayers layer switcher, became unwieldy; the layer switcher was never a user-friendly layout to begin with, and adding 20 layers to an OpenLayers map with an unplanned mix of base and overlay layers leaves much to be desired.

By Wednesday, it was clear that the hodge-podge of available disk space attached to the hosting machine wasn’t going to cut it; though we started with just over 4TB spread over 3 different drives, managing the data was becoming unwieldy at the same rate as the UI. Thankfully, by Wednesday the 20th, John Graham was able to get access to another Sun X4500 and set it up, giving us a clean 16TB drive to put new and old imagery on. (About 6 hours later, the NFS machine on which all of the current data was stored began to fail, most likely due to heavier-than-normal load; I spent most of that day moving data off the old drive and onto the new one.)

In addition to the data migration, at this time Aaron Racicot was able to step up and offer his help in building a GeoExt-based UI for the map. His efforts turned my hack into a reasonable UI for browsing the map, and it is really only because of that work that I was able to keep going.

Over the weekend, at CrisisCamp, I was able to add additional features to support Ushahidi; the code was moved into GitHub as haitibrowser. In the middle of this week, the code was integrated into APAN, the All Partners Access Network, to support the efforts of SOUTHCOM in maintaining a high-quality Common Operating Picture of events in the area.

Over the past two weeks, data has continued to pour in, in the hundreds of gigabytes a day. This is in large part thanks to the generosity of the commercial imagery providers, in addition to the data made available by organizations like NOAA, companies like Google, and more. The extremely high quality imagery produced by RIT/ImageCat/WorldBank is an example of what is possible with the hard work of people with great hardware and a great team.

Using my knowledge — gleaned from my efforts in the earlier days of OpenAerialMap — I have been able to process this data and make it available as tiles and WMS to all consumers, primarily targeted towards OpenStreetMap editors. Over two dozen layers are available via what is now called the Haiti Crisis Map, each one adding a different viewpoint of the data. In addition, the map contains links to other files like KML collections from Ushahidi and Sahana, and as recently as yesterday, it gained the ability to create your own layers, which you can access in the map, provide as a link to someone else, and export as KML.
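(For a sense of what “processing” involves: a typical delivery went through GDAL on the command line, something like gdalwarp -t_srs EPSG:4326 delivery.tif haiti_mosaic.tif to reproject it, then gdaladdo haiti_mosaic.tif 2 4 8 16 to build overviews for fast reads, before being wired into the serving configuration. The filenames here are hypothetical stand-ins; the tools are the ones credited below.)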

As part of the process of making the site more readily available, it is now available from haiticrisismap.org.

The most difficult part of this is attempting to manage the large sources of data. Thankfully, the resources that I have available have allowed me to be a bit lax in my conservation of disk space, CPU time, etc. Many thanks to CalIT, SDSU/SDSC, and Telascience for organizing these resources. In addition, a lot of the ‘hard work’ in the UI has been done by Aaron Racicot of Z-Pulley; I’ve contributed a lot of minor pieces, but the major UI layout and work has been his.

Thankfully, I’ve had the support of a lot of good people in this effort, and a lot of good tools to use. Using GDAL + OSSIM in the background for image processing, MapServer + TileCache for mosaicing and serving, OpenLayers + GeoExt for a UI, and OSM for a base map data layer have all made this effort possible.
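On the serving side, most of the effort is configuration rather than code; a single imagery layer in TileCache looks roughly like the following (the layer name, mapfile path, and projection here are hypothetical stand-ins):

[haiti_imagery]
type = WMS
url = http://localhost/cgi-bin/mapserv?map=/data/haiti.map
layers = haiti_imagery
extension = png
srs = EPSG:4326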

The haiticrisismap will continue to see improvements. It shows a lot about what a small, dedicated group of people can do with an investment of resources when properly motivated; I can honestly say that because of the resources made available through these efforts, we have saved lives. Whether it is through maps produced from OSM data being loaded onto volunteers’ GPS units, or the use of the data by Ushahidi volunteers to determine an accurate location on a map, this tool has been an effective aid to the relief effort in Haiti, and will continue to be, as much as is possible, in the coming days and weeks.

WSGI + Basic Auth

Posted in default on April 15th, 2009 at 10:17:05

I use the logged_in_or_basicauth snippet for a lot of my work, and had had some problems with it since I started using mod_wsgi in place of mod_python. Thanks to this post, I now know why my basic auth under mod_wsgi wasn’t working: the lack of WSGIPassAuthorization On in my Apache config.
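For anyone hitting the same wall: mod_wsgi withholds the Authorization header from applications by default, so the fix is one line in the Apache configuration for the WSGI app:

WSGIPassAuthorization On

With that set, the header shows up in the WSGI environ as HTTP_AUTHORIZATION, which is what the snippet reads.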

Thanks to the author of that post! Also, thanks to Google, since without it, I’d never have found it.

PowerPoint, in a sentence

Posted in default on April 6th, 2009 at 09:13:30

PowerPoint is a way to make gibberish look important.

— my 12-year-old daughter, Alicia

MrSID SDK Improvements

Posted in default on March 10th, 2009 at 12:37:48

For a long time, I avoided MrSID like the plague. After trying to do *anything* useful with it, I finally gave up; the requirement for old versions of gcc, the lack of 64-bit support, etc. really gave me a negative impression of the SDK for MrSID reading. This was especially painful when working with OpenAerialMap, since MrSID has a practical lock on the market for ortho imagery datasources. (There are exceptions to this, but they’re usually JPEG2000 data, which was, in general, even worse to work with using the tools that I use.)

However, during a set of discussions yesterday, Frank said that building MrSID support into GDAL had gotten much easier. I didn’t really believe him, but I had the DSDK handy for other reasons, and according to the build hints, it was supposed to be easy.

Thinking I was going to prove Frank wrong, I started building. I ran ./configure --with-mrsid=~/Downloads/Geo_DSDK-7.0.0.2167, confirmed MrSID ‘yes’ in the configure output, and then ran make.

Three minutes later, I had gdalinfo and gdal_translate built on my Mac with MrSID support.
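(A quick way to verify a build like this: gdalinfo --formats lists every compiled-in driver, and MrSID shows up in that list when support has made it in.)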

My historical problems with MrSID are now completely irrelevant: the effort in the new SDK to support more platforms has clearly worked, and I can say that building MrSID support even on the Mac is trivial. A big thumbs up to the LizardTech folks for their effort in this regard — and to people like Frank and Michael for egging me on to learn this about the DSDK in the first place.

Code Sprint: Day 3

Posted in default on March 10th, 2009 at 09:24:38

Yesterday, I got to sit down and do some real performance testing with the MapServer folks. After rebuilding a local copy of the Boston Freemap on my laptop, I was able to share it with Paul, who ran it through Shark to find out where the performance killers are. The one thing we found was that this 5-year-old MapServer ticket was negatively affecting performance on maps with many labels: the labelling code in MapServer right now, if you’re using outlines, draws each glyph 9 times in order to get a nice outline color. After determining this, we decided to work with the GD maintainers to add the support described in #1243 to GD, using Freetype’s internal stroking code to get the same behavior. (At the time, in Freetype 2.0.9, there was a bug in this code; but we’re now on 2.3.8, so that bug has long since been fixed. :)) This change will likely give a 20% increase in drawing speed for maps with many outlined labels, such as the Boston Freemap.
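For context, the slow path gets hit by any label style along these lines (a minimal mapfile sketch; the font name is a placeholder for an alias in your fontset):

LABEL
  TYPE TRUETYPE
  FONT "sans"
  SIZE 8
  COLOR 0 0 0
  OUTLINECOLOR 255 255 255 # the outline is what currently triggers the repeated glyph draws
END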

After this, we sat down with MrSID and GDAL/MapServer to figure out whether there were performance problems there. One thing we found was that MapServer’s drawing of one band at a time means there is a significant performance hit. In addition, some other performance enhancement techniques are being looked into at the GDAL level by Frank, thanks to the help of the LizardTech developers participating in the sprint. He’s currently looking at improving the way that GDAL reads from MrSID, and was already able to achieve a 25% speed increase by simply changing the size of the internal buffer GDAL uses when converting from MrSID to GeoTIFF. More documentation and experimentation is still in order, but there are some possible optimizations there for users of the library to investigate.

We then had a great dinner at Jack Astor’s.

Thanks to our sponsors for today: Bart van den Eijnden from OSGIS.nl and Michael Gerlek from LizardTech — performance improvements in MapServer label drawing and in GDAL’s MrSID access are potentially big wins for many users of MapServer.

Making a Big OSM Map

Posted in default on February 12th, 2009 at 11:43:50

Mapnik is a great tool. It allows for all kinds of neat toys, and the recent work in SVN has really opened up the possibility of Mapnik as a rendering engine in a lot of areas that it has previously left alone. (Support for reading OGR datasources, the sqlite/spatialite plugins, etc. are all great developments that look likely to land in the upcoming 0.6 release.)

In prep for the OpenStreetMap Mapping Party this Saturday and Sunday in Somerville, I was working on printing a big map to bring with me. A friend at the Media Lab was gracious enough to help me out.

Using Mapnik, it was trivial to produce a large — 29750 x 29750 pixel — PNG image. This was designed to fill up the 49.5″ by 49.5″ printer space at 600 dpi.
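Rendering an image that size is only a handful of lines of Python; roughly like this, assuming an osm.xml Mapnik stylesheet and made-up bounds for the Somerville area (0.6-era API, where the bounding box type is still called Envelope):

import mapnik

# A 29750x29750 pixel canvas, styled by an OSM stylesheet.
m = mapnik.Map(29750, 29750)
mapnik.load_map(m, "osm.xml")

# Hypothetical bounds, in the stylesheet's coordinate system.
m.zoom_to_box(mapnik.Envelope(-71.14, 42.37, -71.07, 42.42))
mapnik.render_to_file(m, "bigmap.png")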

The printer prefers PDF, PS, or TIFF. I was able to take that PNG and convert it to a TIFF — but the resulting TIFF was DEFLATE-compressed, and the printer help only mentioned LZW compression. I decided to fall back to trusty GDAL to try to fix this. I found that the ImageMagick-converted TIFF had one giant block — and GDAL was not pleased with this at all. (Its internal un-blocking scheme doesn’t work with compressed TIFFs.)

Thanks to a suggestion from Norman Vine, I was able to use the OSSIM image copy program (icp) to convert this giant TIFF to a tiled TIFF which GDAL could easily read: icp tiff_tiled -w 256 image2.out.tiff image.icp.tiff. Once I had done this, I recompressed the TIFF using LZW compression with GDAL: gdal_translate -co COMPRESS=LZW image.icp.tiff image.lzw.tiff, and was able to upload the 3GB image to the printer.

All in all, it took a bit more effort than I was expecting, but I’ve got a 4ft by 4ft map to bring to the mapping party this weekend. In the process, I also got to wanting magnification support in Mapnik… which is amusing, since just 24 hours before, I’d read a thread on the MapServer list and couldn’t imagine for the life of me why such a thing mattered.

Looking forward to showing the map off to local OSMers at the mapping party!