Archive for the 'Locality and Space' Category

OpenLayers: Still popular on YouTube, years later.

Posted in OpenLayers, YouTube on March 22nd, 2014 at 06:32:32

In 2007, I posted a video to YouTube; it was just a 5 minute, silent how-to video showing how to take data you had in a shapefile, open it in QGIS, style it, export it to a mapfile, and load it into OpenLayers. I’ve given pretty much this exact presentation to groups around the world, from Cape Town, South Africa, to Osaka, Japan, but at the time it was just a quick demo I put together, related to a wiki page: Mapping Your Data, in the OpenLayers wiki.
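
For anyone retracing that workflow from the command line, the last two steps can be sanity-checked without touching a browser. A minimal sketch, assuming the QGIS export produced a mapfile called parcels.map with a layer named parcels (both names are placeholders):

$ shp2img -m parcels.map -o test.png    # render the mapfile locally to confirm the styling survived the export
# then request the same layer through the mapserv CGI, which is what an OpenLayers WMS layer does under the hood:
$ curl -o wms_test.png "http://localhost/cgi-bin/mapserv?map=/path/to/parcels.map&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&LAYERS=parcels&STYLES=&SRS=EPSG:4326&BBOX=-180,-90,180,90&WIDTH=512&HEIGHT=256&FORMAT=image/png"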

I hadn’t paid attention to it in forever — I uploaded it back in 2007 and haven’t really thought about it since. But since I’ve been using YouTube a bit more lately, I actually looked at my analytics… and realized that this video still gets *400 views every month*, with an average of two minutes watched per view.

This means that more than 20 minutes gets wasted watching this video every day (on average); that is more time than I spend on YouTube in an average week. (Given my new employer, I can imagine that changing somewhat in the near future.)

Amusingly enough, for a long time this wasn’t my most popular video; the OpenLayers video is a bit long, and with no sound, can be a bit of a drag. (The pace of it, even 7 years on, still impresses me though; I spent a whole weekend just going through the motions to get the flow down. It really does work nicely.) My most popular video for a long time was an N95 Accelerometer Demo:

This demo showed the use of a Python script to use the accelerometer and simple 2d graphics to move a ball around the screen. (The Symbian Python APIs for interacting with 2d graphics were terrific, and I wish modern phones had something similarly easy.) In the week after that video launched, it had *1500* views; but it was a flash in the pan, and hasn’t maintained its popularity, getting only 2 watches in the last 30 days. (This video was popular enough that I was invited to join the YouTube monetization program, unlike the OpenLayers video, which was never ‘viral’ enough to get there.)

I’ve never been much of a video guy before — another thing I can see changing — but I’m now putting together some of the videos from my quadcopter flights. Last night, I published my bloopers from the first couple days of flying:

But based on my current numbers, I guess I can never expect anything I do on YouTube to be more popular than a silent video about OpenLayers that I published back in 2007.

I guess this really just goes back to: OpenLayers was a unique experience, and is probably the most impressive thing I will actually work on for the benefit of the internet at large… ever.

olhttp and DjangoZoom at FOSS4G 2011

Posted in Django, FOSS4G 2011, OpenLayers on September 15th, 2011 at 08:07:15

So, on Sunday, we released the new version of OpenLayers, with updated mobile support. This included the ability to do dragging and panning and even editing of features on mobile devices, including Android and iOS.

Last week, I finished up a quick project, called olhttp. olhttp is a demo of how to use the OpenLayers HTTP protocol to create a simple, straightforward UI for editing features — but one that is easily customizable.

This afternoon, I decided to make an attempt to put these two things together — specifically, to make it possible to demo feature editing on a mobile device like my new tablet, saving the features and then being able to display them on my Nexus S or a laptop. So, I wanted a quick and easy deployment plan for a GeoDjango app — after my experience with DotCloud, I’ve come to the realization that hosting my own shit is for chumps (unless I really need some specific high-level SLAs for uptime, which I almost never do for personal projects).

Luckily, I happened to be at a table full of FOSS4G Django Hackers, so they were able to suggest DjangoZoom, which recently added support for GeoDjango/PostGIS. I was able to apply for an invite, and later that afternoon, I had one, and was able to start playing with it.

Now, at the time, I was in a talk, so the only thing I had with me was my tablet; I figured I’d set my account up, and see how it went. Turned out, the answer was “really well”.

The way DjangoZoom works by default is that you give it a Github repository URL, and it will automatically fetch the code for you, and set up a Django database, appserver, etc., deploying your application’s code to the DjangoZoom servers. What does this mean in practice? Well, for me, it meant that I was able to continue with the setup process for DjangoZoom — all the way up through actually getting a working application deployed, without even switching from the tablet to the laptop. I provided my Django app to the platform, and it worked right out of the box.

After getting my application deployed in just minutes, I then moved on to modifying my app to specifically target touch devices. This included modifying the UI to be more touch friendly — larger editing icons, for example, since the defaults are very difficult to hit on a tablet or phone screen at 200 ppi. This work (in a new github repo for the time being, olhttp-django) complete, I now have a simple, easy to use tablet editing UI. It works on iOS and Android phones, it works in all browsers on the desktop, and it provides an easy to use data-input mechanism — and I never had to touch an Apache config.

That’s what I call “success”.

VSI Curl Support

Posted in GDAL/OGR, Locality and Space, Software on October 4th, 2010 at 06:14:47

In a conversation at FOSS4G, Schuyler and I sat down with Frank Warmerdam to chat about the possibility of extending GDAL to be able to work more cleanly when talking to files over HTTP. After some brief consideration, he agreed to do some initial work on getting a libcurl-based VSIL wrapper built.

VSIL is an API inside of GDAL that essentially lets you treat files accessed through various streaming protocols as if they were normal files; it is used, for example, to support accessing content inside zipped containers, and other similar data access patterns.
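
As a concrete illustration (the file names here are hypothetical), the same GDAL tools work unchanged whether the path points at a plain file, a member of a zip archive, or, with the new wrapper, a URL:

$ gdalinfo elevation.tif                               # ordinary local file
$ gdalinfo /vsizip/archive.zip/elevation.tif           # the same file inside a zip, via the zip VSIL handler
$ gdalinfo /vsicurl/http://example.com/elevation.tif   # the same file over HTTP, via the new curl VSIL handler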

GDAL’s blocking strategy — that is, the knowledge of how to read sub-blocks of files in order to obtain the information it needs, rather than needing to read a larger part of the file — is designed to limit the amount of disk I/O needed for rendering large rasters. A properly set up raster can significantly limit the amount of data that needs to be read, helping improve tile rendering time. This type of access would also allow you to fetch metadata about remote images without the need to access an entire (possibly large) image.
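
As a rough sketch of what “properly set up” means in practice (the file names are placeholders): internal tiling plus overviews let GDAL satisfy a small request by reading only a handful of blocks.

$ gdal_translate -co TILED=YES source.tif tiled.tif   # re-block the raster into internal tiles
$ gdaladdo -r average tiled.tif 2 4 8 16 32           # build overview levels for cheap zoomed-out reads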

As a result, we thought it might be possible to use HTTP-based access to images using this mechanism, for metadata access and other similar lightweight reads over the web. Frank thought it was a reasonable idea, though he was concerned about performance. Upon returning from FOSS4G, Frank mentioned in #gdal that he was planning on writing such a thing, and Even popped up mentioning ‘Oh, right, I already wrote that, I just had it sitting around.’

When Schuyler dropped by yesterday, he mentioned that he hadn’t heard anything from Frank on the topic, but I knew that I’d seen something go by in SVN, and said so. We looked it up and found that the support had been checked into trunk, and we both sat down and built a version of GDAL locally with curl support — and were happy to find out that the /vsicurl/ driver works great!

Using the Range: header to do partial downloads, and parsing some directory-listing-style pages for READDIR support to find out what files are available, the libcurl VSIL support means that I can easily get the metadata about a 1.2GB TIF file with only 64kb of data transferred; with a file that has proper overviews, I can pull a 200 by 200 overview of the same file while using only 800kb of data transfer.
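
A quick way to see this in action is to point gdalinfo at the same remote file used in the timing test below; only a few small ranged requests go over the wire to read the headers:

$ gdalinfo /vsicurl/http://haiticrisismap.org/data/processed/google/21/ov/22000px.tif
# reports the size, projection, and overview structure without downloading the image itself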

People sometimes talk about “RESTful” services on the web, and I’ll admit that there’s a lot to that that I don’t really understand. I’ll grant that the TIFF format is not designed to have HTTP ‘links’ to each pixel — but I think the fact that by fetching a small set of header information, GDAL is then able to find out where the metadata is, and request only that data, saving (in this case) more than a gigabyte of network bandwidth… that’s pretty frickin’ cool.

Many thanks to EvenR for his initial work on this, and to Frank for helping get it checked into GDAL.

I’ll leave with the following demonstration — showing GDAL’s ability to grab an overview of a 22000px, 1.2GB tiff file in only 12 seconds over the internet:

$ time ./apps/gdal_translate -outsize 200 200  /vsicurl/http://haiticrisismap.org/data/processed/google/21/ov/22000px.tif 200.tif
Input file size is 22586, 10000
0...10...20...30...40...50...60...70...80...90...100 - done.

real	0m11.992s
user	0m0.052s
sys	0m0.128s

(Oh, and what does `time` say if you run it on localhost? From the HaitiCrisisMap server:

real	0m0.671s
user	0m0.260s
sys	0m0.048s

)

Of course, none of this counts as a real performance test, but to give a sense of the difference in performance for a single simple operation:

$ time ./apps/gdal_translate -outsize 2000 2000 \
     /vsicurl/http://haiticrisismap.org/data/processed/google/21/ov/22000px.tif 2000.tif
Input file size is 22586, 10000
0...10...20...30...40...50...60...70...80...90...100 - done.

real	0m1.851s
user	0m0.556s
sys	0m0.272s

$ time ./apps/gdal_translate -outsize 2000 2000 \
    /geo/haiti/data/processed/google/21/ov/22000px.tif 2000.tif
Input file size is 22586, 10000
0...10...20...30...40...50...60...70...80...90...100 - done.

real	0m1.452s
user	0m0.508s
sys	0m0.124s

That’s right, in this particular case, the difference between doing it via HTTP and doing it via the local filesystem is only .4s — less than 30% overhead, which is (in my personal opinion) pretty nice.

Sometimes, I love technology.

OSGeo Mission: Collaborative Development

Posted in FOSS4G 2010, Locality and Space, OSGeo on September 13th, 2010 at 05:54:21

At the OSGeo Board meeting in Barcelona, we discussed many things, but one of the topics of special interest to me is the simple question: “What is OSGeo all about?”

The first place to look for that, of course, is the website; although many parts of the website address many specific problems, there is one place that we define what OSGeo is really about: the mission statement. It says that the Mission of OSGeo is:

To support the collaborative development of open source geospatial software, and promote its widespread use.

When we started our board discussions, there was one word missing there: the “collaborative” is something we voted to add, something I was very supportive of. There are many organizations (Sencha being a significant example in the space I work in) that develop Open Source software without developing it openly. OSGeo is not about that: instead, it’s about encouraging exactly the opposite.

One of the most important things that OSGeo incubation does is ensure that a project is collaboratively developed. We look for projects with a reasonably broad base of support, in terms of both developers and users. We seek to encourage community; our default project setup uses open, widely available collaborative development tools.

We host dozens of mailing lists. We have a single login account that gives access to the bug trackers for more than a dozen projects. We seek the broadest interaction between projects possible in order to foster a collaborative environment.

OSGeo is a really interesting case for this type of foundation work, because we have such a broad collection of projects despite the narrow scope. Databases. Web servers — both Map and other GIS related. Clients. Data manipulation libraries. Metadata catalogs. All of them interact at almost every stage of the process. Interoperability of this software is a key way to make the Open Source geospatial world more successful, and something we do relatively well.

So, if anyone ever asks you: What does OSGeo do? The answer, at its heart, is: “Support the collaborative development of open source geospatial software.” And I’m pretty thrilled with both the goal, and the success so far.

New Mailing List: tiling; Feedback On WMTS

Posted in FOSS4G 2010, OSGeo, TileCache on September 9th, 2010 at 03:07:15

In the past, tiling was discussed on an EOGEO list. In the meantime, OSGeo has grown up, EOGEO has moved on, and it seems that there isn’t a very good home for future tiling discussions.

As a result, I have added a tiling list to the OSGeo mailing list server.

Tiling List @ OSGeo

Projects that I hope to see people join from: TileCache, Tirex, MapProxy, GeoWebCache (GWC), and others.

This list will be discussing general tiling ideas — how to cache tiles, how to manage caches, how to work with limited caches, where to put your tiles, things like S3, etc. etc. If you are at all interested in tiling — not at the level of a specific application, but in general — please join the list.

Additionally, if you are interested in discussing providing feedback to the OGC regarding the WMTS spec — especially if you are an implementer, but also if you are a user — I would encourage you to join the standards list at OSGeo:

http://lists.osgeo.org/mailman/listinfo/standards

Several people have expressed interest in coordinating a response to the OGC regarding the spec, and we would like to use this list to work together on it.

Are you more generative than consumptive in your field?

Posted in Locality and Space, OpenLayers, Social, Software on May 26th, 2009 at 10:57:47

Anselm just posted what appears to be a random thought on twitter:

Are you more generative than consumptive in your particular field? … Create more than you consume?

In open source, I often rephrase this question as “Are you a source, or a sink?”

There are many people in the community who contribute more than they consume. Organizations, individuals, etc. There are also many sinks in the community — since entropy is ever increasing, this seems a foregone conclusion — and one of the key things that causes an open source project to succeed or fail is the balance of sources and sinks.

I personally try very hard to be a source in all that I do, rather than a sink. One way that I do this is that I try very hard to always follow up any question I ask — for example, on a mailing list, on an IRC channel, or what have you — with at least two answers of my own. This means that, for example, when I hopped into #django to ask about best practices for packaging apps, I stuck around and helped out two more people — one who was asking a question about PIL installation, and one about setting up foreign keys to different models.

Now, in the end, my answers were simple — no one with even a basic knowledge of Django would have had problems answering them. But by sticking around and answering them, I was able to make up to some extent for the time/energy that I consumed from someone more familiar with the project, by saving them from needing to answer as well.

It is often the case that users trying to get help will claim that once they get help, they will ‘contribute back’ to the community by, for example, writing documentation. This never happens. Though there are exceptions to every rule, it is almost always the case that users who ask a question, prefacing it with “I will document this for other users”, never follow through on the latter half. The exceptions to this — or rather, the alternate cases — are cases where a user has already invested significant research, and likely already started writing documentation. Unless the process is started before the problem is solved, it is almost universally true — in my experience — that the user will act as a sink, taking the information from the source and disappearing with it.

I work very hard to support a number of the open source projects I’m involved in. Though my involvement lately has been more hands off — by doing things like writing documentation instead of answering questions, acting as a release manager instead of fixing bugs, and so on — I work very hard to keep the karmic balance of my work on the positive side. I believe that this pays off in the long run — I have somewhat of a reputation for being helpful, which is beneficial to me since it means I’m more likely to receive help when I need it. I also work to keep karmic balance high on the part of the organization I work for, since many of the other people in the organization are less able to do so.

These rules don’t apply solely to open source — I have the same karmic balance issues going on in my work inside of MetaCarta — but I maintain the same attitude there. Coming in with the idea that it is okay to be a sink sets a nasty precedent. In the end, I think that everyone loses. Sinks — both in open source and other karmic ventures — will eventually use up the karma they start with, and be left out to dry. More than one person has extended their information seeking, without contributing back, past the point where I am willing to continue supporting their information entropy.

I joke sometimes about giving out “crschmidt karma points”. Though I don’t have an actual system in this regard, I do quite clearly delineate between constant sinks, regular sources, and the grey areas in between. I try to stay on the source side, and I encourage everyone else to do the same — even if it’s only by answering easy questions on the mailing list, or doing a bit more research on your own. Expecting other people to fix your problems, in open source or otherwise, is a false economy of help: in the end, it simply doesn’t work.

Toronto Code Sprint: Day 2

Posted in Locality and Space, Mapserver, OSGeo, PostGIS, Toronto Code Sprint on March 8th, 2009 at 22:44:32

Day 2 of the code sprint seemed to be much more productive. With much of the planning done yesterday, today groups were able to sit down and get to work.

Today, I accomplished two significant tasks:

  • Setting up the new OSGeo Gallery, which is set to act as a repository for demos from users of OSGeo software, in the same way that the OpenLayers Gallery already does for OpenLayers. We’ve even added the first example.
  • TMS Minidriver support for the GDAL WMS Driver: Sitting down and hacking out a way to access OSM tiles as a GDAL datasource, Schuyler and I built something which is reasonably simple/small — an 18k patch including examples and docs — but allows for a significant change in the ability to read tiles from existing tileset datasources on the web.
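
To give a flavor of what the minidriver enables (this is only a sketch modeled on the GDAL WMS driver’s XML service descriptions, not the contents of the patch itself; element names and values may need adjusting for your GDAL version), an OSM-style tile source can be described in a small XML file and then read like any other raster:

$ cat > osm_tms.xml <<'EOF'
<GDAL_WMS>
  <Service name="TMS">
    <ServerUrl>http://tile.openstreetmap.org/${z}/${x}/${y}.png</ServerUrl>
  </Service>
  <DataWindow>
    <UpperLeftX>-20037508.34</UpperLeftX>
    <UpperLeftY>20037508.34</UpperLeftY>
    <LowerRightX>20037508.34</LowerRightX>
    <LowerRightY>-20037508.34</LowerRightY>
    <TileLevel>18</TileLevel>
    <TileCountX>1</TileCountX>
    <TileCountY>1</TileCountY>
    <YOrigin>top</YOrigin>
  </DataWindow>
  <Projection>EPSG:900913</Projection>
  <BlockSizeX>256</BlockSizeX>
  <BlockSizeY>256</BlockSizeY>
  <BandsCount>3</BandsCount>
</GDAL_WMS>
EOF
$ gdal_translate -outsize 512 512 osm_tms.xml osm_overview.tif   # pull a small overview of the whole tileset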

Other things happening at the sprint today were more WKT Raster discussions, liblas hacking, and single-pass MapServer discussions, as well as some profiling of MapServer performance with help from Paul and Shark. Thanks to the participation of the LizardTech folks, I think there will also be some performance testing done with MrSID rendering within MapServer, and there was — as always — more of the “proj strings are expensive to look up!” discussion.

Other than that, it was a quiet day; lots of work getting done, but not much excitement in the ranks.

We then had a great dinner at Baton Rouge, and made it home.

This evening, I’ve been doing a bit more hacking, opening a GDAL Trac ticket for an issue Schuyler bumped into with the sqlite driver, and pondering the plan for OpenLayers tomorrow.

As before, a special thanks to the conference sponsors for today: Coordinate Solutions via David Lowther, and the lovely folks at SJ Geophysics Ltd. Thanks for helping make this thing happen! I can guarantee that neither of those GDAL tickets would have happened without this time.

Toronto Code Sprint: Day 1

Posted in Mapserver, OSGeo, PostGIS, Toronto Code Sprint on March 8th, 2009 at 07:55:43

I’m here at the OSGeo Code Sprint in Toronto, where more than 20 OSGeo hackers have gathered to work on all things OSGeo — or at least MapServer, GDAL/OGR, and PostGIS.

For those who might not know, a code sprint is an event designed to gather a number of people working on the same software together with the intention of working together to get a large amount of development work done quickly. In this case, the sprint is a meeting of the “C tribe”: Developers working on the C-based stack in OSGeo.

After some discussion yesterday, there ended up being approximately 3 groups at the sprint:

  • People targeting MapServer development
  • PostGIS developers
  • liblas developers

(As usual, I’m a floater, but primarily concentrating on OpenLayers; Schuyler will be joining me in this pursuit, and I’ve got another hacker coming Monday and Tuesday to sprint with us.)

The MapServer group had the most lively discussion (and was also the largest). It sounded like there were three significant development discussions taking place: XML mapfiles, integration of pluggable rendering backends, and performance enhancements, as well as work on documentation.

After a long discussion of the benefits and merits of XML mapfiles, it came down to one main target use case for the XML mapfile: encouraging the creation and use of more editing clients. With a format that can be easily round-tripped between client and server, you might see more editors able to really speak the same language. In order to test this hypothesis, a standard XSLT transform will be created and documented, with a tool to do the conversion; this will allow MapServer to test out the development before integrating XML mapfile support into the library itself.
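
Purely as an illustration of that proposed workflow (the stylesheet and file names below are placeholders I made up, not anything that ships with MapServer), the conversion tool could be as simple as an XSLT run followed by a render check:

$ xsltproc xml_to_mapfile.xsl mymap.xml > mymap.map   # convert the XML mapfile into a classic .map
$ shp2img -m mymap.map -o check.png                   # confirm MapServer renders the converted file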

I didn’t listen as closely to the pluggable renderers discussion, but I am aware that there’s a desire to improve support and reduce code duplication of various sorts, and the primary author of the AGG rendering support is here and participating in the sprint. Recently, there has been a proposal to the list to add OpenGL based rendering support to MapServer, so this is a step in that direction.

The PostGIS group was excited to have so many people in the same place at the same time, and I think came close to skipping lunch in order to get more time working together. In the end, they did go, but it seemed to be a highly productive meeting. Among their discussions was some talk of the WKT Raster project, which is currently ongoing, I believe.

After our first day of coding, we headed to a Toronto Marlies hockey game. This was, for many of us, the first professional hockey we’d ever seen. (The Marlies are the equivalent of AAA baseball: one step below the major leagues.) The Canadians in the audience, especially Jeff McKenna, who played professional hockey for a time, helped keep the rest of us informed. The Marlies lost 6-1, sadly, but as a non-Canadian, I had to root a bit for the Hershey team. (Two fights did break out; pictures forthcoming.)

We finished up with a great dinner at East Side Mario’s.

A special thanks to our two sponsors for the day, Rich Greenwood of Greenwood Map and Steve Lehr from QPUBLIC! Our sprint was in a great place, very productive, and had great events, thanks to the support of these great people.

Looking forward to another great day.

Geodata Cost Recovery: Eaton County

Posted in Locality and Space on February 25th, 2009 at 08:37:47

I was pointed to Eaton County’s GIS Data Prices last night, and all I can say is how disappointed I am that people can still feel that this is an appropriate way to fleece their taxpayers. The data is collected, and reproduction costs for it are probably in the realm of a couple hundred bucks — less, if you just distribute it online. (Clearly, you already have a website.) Yet you charge twelve *thousand* dollars for copies — and even after that, you’re still limited in what you can do.

This kind of thing is just a damn shame. Taxpayers should insist that this data be made available at reasonable reproduction costs; the policies of GIS departments that try to make money off of these things are simply silly so long as the data is collected with taxpayer dollars.

(If the GIS department does not receive state funding, then I suppose this type of cost recovery makes sense — in the same way that Sanborn or any other commercial entity would charge for it. However, I suspect the primary client of such data is the state itself, in which case it’s still taxpayer dollars covering the costs somewhere…)

Yahoo! Maps APIs, aka ‘grr, argh!’

Posted in Locality and Space, OpenStreetMap on February 16th, 2009 at 15:14:00

I have a love/hate relationship with Yahoo!’s mapping API. It’s lovely that Yahoo! believes, unlike Google and other mapping providers, that their satellite data is a suitable base layer to use for derivation of vectors. This openness really is good to see — they win big points from me in this regard. (Google, on the other hand, is happy to have you give them data against their satellite imagery, but letting you actually have it back is against the Terms of Service.)

However, the Yahoo! Maps AJAX API has never gotten much love. I think a preference for Flash has always existed in the Yahoo! world; if I recall correctly, their original API was Flash-based.

However, I realized today that this tendency to leave the AJAX API in the dust has resulted in something that seriously affects me: the Yahoo! Maps AJAX API uses a different set of tiles, which has two fewer zoom levels available in it:

[Screenshots: the AJAX Maps API at its most zoomed-in level, compared with the Flash Maps API at its most zoomed-in level.]

For the new OpenStreetMap editor I’m working on, this is a *serious* difference: although the amount of information actually available in these tiles isn’t *that* much greater, the extra zoom levels let the user extract more detail by getting in a bit closer, and be more precise in the placement of objects when using Yahoo! as a basemap.

Although it would be relatively easy to rip the tiles out, and create an OpenLayers Layer class that loaded them directly, this violates the Yahoo! Terms of Use. This is understandable, but unfortunate, because it means I can’t solve the problem with my own code.

What I would really love to see is more providers creating a more friendly way of accessing their tiles. I understand the need for counting accesses, and the need for copyright notifications. If an API were published that allowed you to:

  • Fetch a copyright notice for a given area, possibly also returning a temporary token to use
  • Following that, fetch tiles to fill that area
  • Require users to display the copyright notice in such a way as to make Yahoo! and their providers happy

This would allow for building a layer into OpenLayers which complied with this, without depending on Yahoo! to write a Javascript layer that did these things for me.
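
Purely as a hypothetical sketch of that flow (none of these URLs, parameters, or responses exist; they are made up to illustrate the idea):

# 1. ask for the attribution text and a short-lived token for the area of interest
$ curl "http://tiles.example.com/attribution?bbox=-71.2,42.3,-71.0,42.4&zoom=17"
#    a response might look like: {"copyright": "Imagery (c) ...", "token": "abc123", "expires": 3600}
# 2. fetch tiles for that area, passing the token so accesses can be counted
$ curl -o tile.jpg "http://tiles.example.com/tile/17/38597/48729.jpg?token=abc123"
# 3. the client is then responsible for displaying the returned copyright text prominently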

Now, it’s understandable that this doesn’t happen — having the client outside of Yahoo!’s control means that they can’t *enforce* that the copyright is displayed prominently, as they are able to (to some extent) with their API. However, I think that this type of API would allow more innovation, and possibly even a *more* prominent placement for Yahoo!’s copyrights and notices. For example, in many mapping apps, the bottom inch of the map is not seen much by the users. If there were an API to get the text to display, then an application could place that text in a more prominent location, rather than burying it under many markers or other pieces of text that might overlap it.

In the short term, all I really wish is that the AJAX API used the apparently-newer set of satellite tiles that the Flash API appears to have access to. I think the fact that this isn’t currently possible points toward an alternative access pattern for tiles, one which may make more sense in the long run, where tiles can be used by an application without necessarily running in the constrained Javascript API that these providers have the ability to write. And of course, if you want to provide your users with a ‘default’ API to use, you can always use OpenLayers, and extend it with your own additions…