Indoor Mapping

Posted in default on April 21st, 2013 at 15:42:44

I’ll admit it: I’m obsessed with map data. Not the maps themselves — not how they look, or anything else about them — but the data, the bits that make up the way we find our way in the world.

This wasn’t always the case. In the past, I didn’t care much about the data — I just wanted pretty pictures so I could show things. But as I’ve changed from being a map maker to working on the search side of Nokia’s HERE Maps, I’ve moved away from caring how the map looks, to caring what is underneath.

Every time I walk into a place, my brain immediately tries to think about how I’d map it. Is the Central Cafe in the middle of Union Station a single entity, or is the upstairs eating area a second floor — even though the structure is standalone? What is the right way to represent the curving staircases up to the second level of shops in the main station?

Every time I walk into a complex place like this, I just want to spend a week mapping out every detail. Where are the stores? What is in them? Can we attach a frontage photo of each? How would you represent the Godiva chocolate shop — do you mark the path through it as a public hallway, since it’s used that way, or as part of the store?

I want every complex building in the world to have a fully annotated set of data about it, not so I can look at it, but so that I can be routed through it.

It’s not that the technology and approach to do this don’t exist: the data behind things like Bing’s Venue Maps (http://binged.it/ZGAGfT), sold through Nokia as Destination Maps, typically covers everything needed for routing. Public spaces are demarcated. Entrances are noted. Every piece of info that you could want is there. But these maps exist for so few places, and they’re so poorly integrated with the rest of the mapping experience.

I’m tired of wandering around for 20 minutes looking for the luggage lockers. Of not knowing where the restrooms are. Of being trapped in the maze that is the International Spy Museum, looking for the way out. (By the way: International Spy Museum — awesome place, definitely worth the price of admission. Great mix of gadgets, artifacts, and pop culture, plus a huge exhibit on James Bond.) I love these places, but I hate not knowing where I am!

“More than 4230 venues”, says Bing. Well, great, but it appears that you have fewer than 10 places in Washington DC. No Union Station, no Air and Space Museum, no American History Museum. No National Archives, no White House.

These are the easy places. Every one of these places has a map — most of them produced by the Smithsonian, and if they’re not public domain, the Smithsonian would probably be perfectly happy to help publish the data more widely. There are 18 Smithsonian museums in DC alone, and pretty much every one of them has an interior map they hand you when you walk in the door.

I know that this isn’t something that will happen top down, not realistically. This is a case for something like OpenStreetMap, for a crowd-sourced approach. Because I’m tired of “4230 venue maps!” I want them for every strip mall, for every department store, for every place where I’ve ever followed a sign to the restrooms, asked for directions, and gotten lost anyway.

Until we have that — and until I’m carrying it in my pocket — I’m never going to be able to stop thinking “Damn, I wish that I could sit in here for a week and make a damn map.”

Sequester Impact

Posted in default on March 31st, 2013 at 03:00:29

I guess there’s some sort of budget-reducing activity that happened as a result of the lack of a signed budget, which is referred to as the sequester. I was vaguely aware of this, but for the most part, it doesn’t have any impact on my life, so I didn’t really care.

That changed today, when I decided to look into what it takes to go on a White House tour: “Due to staffing reductions resulting from sequestration, we regret to inform you that White House Tours will be canceled effective Saturday, March 9, 2013 until further notice.”

Well that sucks.

(In reality, this has a minimal impact on me: I’m looking to travel about 3 weeks from now, and White House tours require notice through your congressperson a minimum of 21 days in advance. But now I know that even if I had known that and done something about it, I’d still be out of luck.)

I guess sometimes the Federal Government budget does actually have an impact on me (other than via higher taxes). I guess that means I should pay more attention to these things. (Yes, I am a terrible citizen. Whaddya want from me.)

Small World: Review

Posted in default on January 20th, 2013 at 20:25:00

Small World is a game I’ve been interested in since I saw the first episode of Tabletop; it seemed a somewhat complex, but interesting game.

When thinking up gifts this Christmas, Kristan and I decided that it would be an interesting thing to get for Julie.

Today, we played for the first time, and it was fun! It was definitely a bit long for the first game — about two hours — but I think it’ll be a lot quicker the second time around. (It’s also the type of game that I could see working well on a computer — the physical interactions with all the options require a fair amount of interpretation that isn’t trivially internalized.)

The basic idea of the game isn’t much different from Risk: You have territories. You conquer other territories. As you are conquered, you lose pieces. Once you lose enough pieces, you pick a different race and start over; the end-game goal is to get as many ‘tokens’ (game points) as possible.

The race choices you have are limited to five at any given time, rotating through a set of 15; each race gets combined with a special power — things like “Flying”, which gives the ability to conquer non-adjacent territories, or “Mounted” which lowers the cost of conquering Farmland and Hills.

Since the race + power combinations are random (shuffle the deck of cards), you can get interesting combinations: Bivouacking, which gives you 5 extra defense tokens to deploy as you wish, combined with Halflings, whose initial two regions can’t be conquered at all.

For Alicia, Kristan, and me, the game was quite enjoyable and picked up throughout; Julie was a bit slower to pick it up, and lagged slightly towards the end of the game, but that doesn’t surprise me with ~2 hours of back and forth.

For people who like board games, I think that Small World would be a positive addition to any gaming collection, and I look forward to the chance to play with some friends who might be interested in the future.

My Job, simply: Local Search, Up-Goer Five Edition

Posted in default on January 20th, 2013 at 16:00:39

This morning, a couple of my friends shared their job descriptions, written in a text editor designed to only allow you to use the 1000 (or ‘ten hundred’) most commonly used words, inspired by an xkcd comic that describes the Saturn V rocket under the same constraint.

I tried to do the same, describing my job working on the Local Search team for Nokia:

I try to take the words that people type into their phones and find the places they are looking for. Sometimes people can not type very well, which makes it harder. Sometimes the places that people are looking for are not places we know about, which also makes finding them harder.

I work on finding which places we show people, and which places we do not show people. There are many people who work on this problem. They use computers from their homes to tell me which searches show the right place.

Once we know which places are right, we tell a computer to try to show the right place more often. The computer looks at all the places it knows about, and tries to guess which way it should order the places to make sure as many people find the right place as possible.

Searching for places is a hard job. You need to know about places, know how people type when they try to find places, and put the best place first on a phone.

Edited based on feedback from a friend on Facebook to change the idiomatic use of the word “return”; original.

It was actually a fair amount easier than I thought — perhaps I didn’t go into as much detail as I would otherwise, and the writing certainly feels a bit stilted, but there was no part of my job that I felt I couldn’t describe reasonably well.
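
Translated back out of ten-hundred-word vocabulary: the “ordering” step is just ranking candidate places by some score for each query. The toy sketch below is not Nokia’s actual ranker, just an illustration with a made-up token-overlap score standing in for a model trained on those human judgments.

```python
# Toy sketch only: rank candidate places for a query by a naive
# token-overlap score. A real local-search ranker would learn its
# scoring function from the human relevance judgments described above.
def score(query, place_name):
    q_tokens = set(query.lower().split())
    p_tokens = set(place_name.lower().split())
    if not q_tokens:
        return 0.0
    return len(q_tokens & p_tokens) / len(q_tokens)

def rank(query, places):
    # Best guess first: the place the user is most likely looking for.
    return sorted(places, key=lambda place: score(query, place), reverse=True)

if __name__ == "__main__":
    candidates = ["Union Station", "Union Square Cafe", "Central Cafe"]
    print(rank("union station", candidates))
```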

Aaron

Posted in default on January 13th, 2013 at 01:58:40

Aaron Swartz was an incredible guy. He was constantly successful in making me feel completely inadequate — which is generally a pretty hard thing to do — and I can claim more success in my life than I would otherwise have had thanks to Aaron’s influence.

The world is worse off without him. My best to all his friends and family.

As a result of Aaron’s passing, I am going to change my recent practice of doing many things on Facebook only. In the past, I would have ensured that my content was also made available in places that weren’t Facebook, because I felt that the freedom and long-term stability that other platforms offered were important. Of late, I have not stuck to that ideal — but the fact that I haven’t is a regression from a belief that I have always had: that sharing things only in walled gardens hurts everyone.

I think this is the kind of thing that I would have frowned upon in myself a decade ago, and there is no less reason now that it should upset me. Sharing information only in a single closed platform is bad for everyone. It’s time to go back to sticking to those principles, and making my information as free as it can be. (There are practical limits to anything, but “I’m a lazy bum” isn’t a good enough excuse.)

Responding to Recruiters: Priority List

Posted in default on October 28th, 2012 at 22:39:50

I get a handful of recruiters who are looking to find me a role in their companies. (Sometimes they are also looking for people who aren’t me to fill roles — which I usually pass on to others by saying “Anyone looking for a job?”, getting a chorus of “Nope”, and moving on.)

While responding to one of these recently, I ran down the checklist I have in my head for what is important to me when looking at a new job. I think the items on this list are essentially a log-scale ordering of binary predictors for how likely I am to consider a switch to another position; for example, I don’t think it’s plausible that I’d consider any position that didn’t meet the first two conditions.

  • Work from Cambridge, MA, ideally in a local office or some other employer-sponsored working space. (Things that are close enough: Cambridge, Boston. Things that are not close enough: Lexington, Waltham, Billerica.)
  • Working in an environment which supports flexibility in work schedule, and is supportive of work/life balance.
  • Working on projects that I don’t personally consider dishonest or immoral.
  • Working with user data — the bigger the better.
  • Working on projects which are visible to the public.
  • Working on interesting new technologies, especially technologies which can be open sourced and shared.
  • Working with maps, or geospatial data.

(Compensation also plays a role, but I don’t think I’ve ever not responded to a recruiter based on that fact.)

I’m not actively looking for a job — despite Nokia’s overall poor performance, I work in the ‘Location and Commerce’ group inside Nokia, which is still making a healthy profit on our overall activities. Most importantly to me, I work with the same team I’ve worked with for more than 6 years now, so switching jobs would be a painful transition that is unlikely to be enticing without a really strong offer.

That said, I often read the engineering blogs of places like Yelp, Netflix, and Foursquare and think “Man, wouldn’t it be cool to work someplace where maybe I didn’t have to put out fires all the time? Where occasionally, I could actually work on cool stuff?” (Note that my brief research into Netflix indicates that it fails *both* of the first two items on my list, so it’s evident that “Companies doing cool things” is not synonymous with companies for whom I would want to work.)

I just miss the days of MetaCarta when occasionally, I got to put together something interesting without spending 75% of my time fighting against people inside my own company, and I dream that somewhere out there, there must be other cool companies to work for where that’s not the case. I’m not convinced this isn’t just a ‘grass is always greener’ thought, though. 🙂

(If you are looking for a senior software developer, and think your company can meet all of the criteria above and be cooler than where I work now, feel free to drop me a line.)

Some comments on EC2 instance heterogeneity

Posted in default on October 24th, 2012 at 20:39:50

An article (Exploiting Hardware Heterogeneity within the Same Instance Type of Amazon EC2) linking to a paper from HotCloud ’12 has some information about mixed instance types for Amazon EC2 machines. I found it interesting, so I browsed through the article. Here are some observations from reading it:

– “Furthermore, the high-memory instances use identical Intel X5550 processors” — Not true, from what I can tell. E5-2665 processors are used across at least the us-east availability zones for all m2 instance sizes — m2.xlarge, m2.2xlarge, and m2.4xlarge. In fact, across several thousand instances spun up, these processors seem to show up as much as 70% of the time in one availability zone (though almost not at all in another). A sketch of how one might check the CPU model per instance follows this list.
– The CPUBench test was done across 20 instances, but the Redis test appears to have only been run against one instance of each type, as far as I can tell. Given the variability in performance between node types, I’m not totally convinced that the difference is entirely explained by instance hardware — though given the CPUBench scores, it’s clear that some of the variability could well be coming from that.
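
As mentioned above, here is a minimal sketch of the kind of per-instance check involved: nothing EC2-specific, just reading the CPU model from /proc/cpuinfo on each launched Linux instance and tallying the results elsewhere (the orchestration for spinning up instances is assumed).

```python
# Minimal sketch: report the CPU model of the (Linux) instance this runs on.
# Run it on each freshly launched instance and tally the output externally.
def cpu_model(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as cpuinfo:
        for line in cpuinfo:
            if line.startswith("model name"):
                return line.split(":", 1)[1].strip()
    return "unknown"

if __name__ == "__main__":
    # e.g. "Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz" on some m2 instances
    print(cpu_model())
```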

Anyway, I primarily wanted to comment on the high memory instances all using X5550s — since it’s clear that they don’t, at least not in US-East 🙂

Deep, Dark, OpenLayers History

Posted in default on May 1st, 2012 at 00:18:46

The OpenLayers that everyone knows today, born in what seems like the dawn of time of the modern JavaScript age, was not created from whole cloth. The early development recorded in our SVN history represents some of the very early work on OpenLayers as it is today, but the project had a life for a year before that which is largely unknown.

I’ll admit that I’m not the best person to tell this story: Most of it is also before my time. I started working with the MetaCarta team in March of 2006, doing some server-side KML hacking. When I joined the company, there had already been three versions of OpenLayers.

“But Chris!”, the educated illuminati among you might say, “OpenLayers wasn’t released until May of 2006! What do you mean, there had been three versions of OpenLayers?”

Well, my friends, it’s a sad, but not shocking tale, all too common in software development: the premature demo.

After the Where conference in 2005, John Frank reached out to several interested parties to help build an open source alternative to the Google Maps API. (Or so I’m told.) MetaCarta had map based interfaces, and it was clear to John then that this newfangled mapping thing was going to be the future — not just for Google, but for all map interfaces. (In fact, John has even been credited with one of the early definitions of Slippy Map, in June of 2005: “A ‘slippy map’ is a type of web-browser based map client that allows you to dynamically pan the map simply by grabbing and sliding the map image in any direction. Modern web browsers allow dynamic loading of map tiles in response to user action _without_ requiring a page reload. This dynamic effect makes map viewing more intuitive.”)

Revamping MetaCarta’s ‘enterprise’ UI to be more user friendly was the primary thing on John’s mind. Switching from a form with *11* form fields to a more understandable one-box search. Improving the experience of map interactions. But for a long time, that was essentially all it was: while there was a core idea behind each of these approaches — the idea of making an open source library out of the results, and distributing it widely — in each case, the demo came first.

Instead of concentrating on building a solid library which could be made into an open source base for many different projects, the first incarnations of OpenLayers were all libraries designed for a single application — something that never works well for creating a more general purpose tool.

This was a major misunderstanding of the market demand: one that could not be overcome by any amount of technical success. What the world needed at the time was not another client/server component; it wasn’t another application that allowed them to do pretty things with maps. When OpenLayers succeeded, it succeeded largely because it avoided solving anything other than the most basic problem; it avoided doing anything other than the one, simple thing of having a draggable map on a web page, and being able to load data from multiple sources. This was crucial to the success of the library we know as OpenLayers today.

Some of the flaws in previous iterations that I saw as a result of this:

  • Core functionality based around parsing WMS GetCapabilities documents. Although many have criticized OpenLayers for not reading WMS Capabilities documents, reading XML from a remote domain in the browser is blocked by design (the same-origin policy). Though there are now common workarounds for these types of problems, at the time this was essentially a showstopper for client-side-only deployment: a key missing ingredient in some of the early OpenLayers work. It was only by throwing away capability parsing — by repeating data in more than one place — that it became trivial to use OpenLayers to talk to remote servers. Note that the problem here has nothing to do with WMS: it has everything to do with ‘entirely client side’ vs. ‘requiring a server-side proxy’ (a sketch of such a proxy follows this list).
  • Centralized hosting of a ‘service’ instead of an API. At one point, there was a thought that one of the things OpenLayers could provide was a ‘mapviewerservice’ — a simple, hosted way to present data online by simply modifying HTTP parameters. (I don’t think this was ever at the core of any of the OpenLayers versions that were written, but it was something we supported even after the transition to the all-public “Mark IV” of OpenLayers.) In the end, nobody at the time really wanted this.
  • Concentration on pretty. OpenLayers, to this day, is ugly as sin out of the box, and is more annoying to customize than some other solutions might be. That said, the core functionality of OpenLayers is designed to *hide*. There are very few things that OpenLayers does — and it tries to hide as much of them as possible. Several previous incarnations had a lot of user-targeted UI — making them more applications than libraries. This was a mistake. What the world needed really was a library.
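
For context on the ‘server-side proxy’ mentioned above, here is a minimal sketch of the idea: the browser asks its own origin for /proxy?url=<encoded remote URL>, and the server fetches the remote capabilities document on its behalf. This is a toy in the spirit of the proxy.cgi helper OpenLayers later shipped, not that script itself, and the allowed-host list is purely illustrative.

```python
# Toy same-origin workaround: a proxy that fetches a remote XML document
# (e.g. a WMS GetCapabilities response) so the browser never has to make
# a cross-domain request itself. Illustrative only; the whitelist below
# is a made-up example, not OpenLayers' actual configuration.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from urllib.request import urlopen

ALLOWED_HOSTS = {"wms.example.com"}  # hypothetical whitelist of remote servers

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        target = query.get("url", [""])[0]
        if urlparse(target).hostname not in ALLOWED_HOSTS:
            self.send_error(403, "Host not allowed")
            return
        with urlopen(target) as remote:  # server-side fetch of the remote doc
            body = remote.read()
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Pages served from this origin can now request the proxied document
    # via /proxy?url=<URL-encoded remote GetCapabilities URL>.
    HTTPServer(("localhost", 8080), ProxyHandler).serve_forever()
```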

(Now, I’m sure that others who were ‘there’ as it were, might have more commentary. And certainly there were many flawed aspects of technical implementation. But these were biggies at the social level, which would have prevented uptake even if the technical flaws had been worked out.)

This post is written in large part because just this last week, I had a conversation in a bar with someone who claimed he helped start a Javascript mapping project called OpenLayers. We went back and forth for a bit, and then I realized he was right: He *did* participate in something that helped set the stage for OpenLayers. (The earlier incarnations, though usable, never really were the thing that people think of as OpenLayers today.) I just didn’t know he did — and he certainly didn’t know that OpenLayers grew up, got legs, and walked away from MetaCarta and into the hands of thousands of adoring fans.

And I didn’t even know his name before last week. But last weekend, I walked into RPI, where I met a couple of college students from RCOS — people who were still in high school when OpenLayers started — and they knew what the OpenLayers project was, and were excited to meet a guy who helped get it started.

So just remember: Though the OpenLayers you know and love today was largely put together over a three day weekend, hacking in a darkened room, with a projector on the wall, and Venkman at our side: before we got there, mistakes were made by us, and others. And even before that, a guy with a vision of easier open source maps saw a future where all maps would be slippy.

So a brief thank you, from me, to all the people who came before me in the OpenLayers history; to OpenLayers Mark 3, 2, and 1, and especially to John Frank, who helped push the project from a vision to a reality.

python SimpleHTTPServer + OpenLayers testing

Posted in default on April 27th, 2012 at 20:17:00

OpenLayers testing for new users always felt a bit odd at things like code sprints: because the OpenLayers tests use XMLHttpRequest, popup windows, and the like, a few tests would always fail unless they were run from an HTTP server. For a product where almost all the tests pass just fine without it, I always found it sort of annoying that a few minor XMLHttpRequest restrictions forced me to set up a server.

This weekend, as I was helping at the OpenHatch Open Source workshop at RPI, I found myself in a position where a new developer was running the tests, and asking me why they failed. I was pointing out that in order for them to pass, they’d have to be run from a webserver, and someone else in the room helpfully pointed out that if you have Python installed, you have a webserver available to you with just one line of code.

“What?” I said, incredulously. I mean, I believed them — in the same way that python -mjson.tool has become a daily part of my life, I’m not entirely surprised by Python modules offering useful command line interactions that help make my life easier. Still, this was a new one to me.

“Sure”, came the reply. “Just use python -m SimpleHTTPServer in the directory you want to serve.”

And I `cd`’d into the root of my OpenLayers checkout, and typed python -m SimpleHTTPServer, and went to http://localhost:8000/tests/run-tests.html — and ‘lo, the tests did pass, and the developer did say it was Good.
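
For reference, the same trick spelled out as a tiny script rather than a one-liner; this is essentially what the command invokes (on Python 3 the module was renamed http.server, so the sketch below uses that name).

```python
# Equivalent of `python -m SimpleHTTPServer`: serve the current directory
# over HTTP on port 8000, so the OpenLayers tests can use XMLHttpRequest.
# On Python 2 the module is SimpleHTTPServer; this uses the Python 3 name.
from http.server import SimpleHTTPRequestHandler, HTTPServer

if __name__ == "__main__":
    # Run from the root of an OpenLayers checkout, then open
    # http://localhost:8000/tests/run-tests.html in a browser.
    HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()
```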

(I probably learned more tips and tricks in the two day workshop about git, and other helpful tools, than I do in a week of doing my own development. Kids these days, teaching me new things!)

“Get off my lawn!” — How Maps + JS Have Changed

Posted in default on April 25th, 2012 at 04:44:15

Occasionally, I think back to when we started writing OpenLayers, and some of the tools we didn’t have when I started programming JavaScript. Then I feel old, and start to yell at kids to get off my lawn.

In May of 2006, when we started working on OpenLayers:

  • Internet Explorer was 63% of W3Schools web traffic. (Today? 19%.)
  • IE7 wouldn’t be released for another 5 months.
  • SVG support was only available via the Adobe SVG plugin, and only in IE on most platforms.
  • Safari was at version 1.2/1.3.
  • Firefox was not yet at version 1.5, which would bring in SVG support, but disabled by default.
  • There was no Firebug. “Real men use Venkman!” (I believe that as part of the rewrite of OpenLayers that we eventually shipped, we did bump into Firebug 0.3/0.4. 1.0 wouldn’t be released for another 6 months.)
  • jQuery was still 6 months from being released.

In addition to the JavaScript world changing, the Maps world has changed. Although I was originally interested in OpenLayers because of OpenStreetMap, there wasn’t a lot there back in 2006. That isn’t the only way the world has changed:

  • When OpenLayers started, OpenStreetMap had approximately 2000 registered users. (Today? 500,000.) At the time, there was no regular dump, and the map that existed was… ‘interesting’ 🙂 (Mapnik wouldn’t come until later.)
  • Installing PostGIS on most platforms was… touchy at best. (Things like pgRouting, though coming into existence around that time, were far from practical to install, even more than a year later.)
  • ka-Map and Community Map Builder were still the de facto web mapping software.
  • There was no one in the open source world caching XYZ tiles yet. (The FOSS4G discussion on tile caching in September of 2006 was the first real discussion of that.) TileCache was developed later that year — after a discussion where we all agreed that WMS-style strings were a good idea, and then someone left the room and immediately started talking about TMS 🙂
  • All map rendering software was somewhat difficult to install at the time — things like GeoServer’s current wonderful web UI were… not as complete then as they are now 🙂
  • Nobody knew how to render things in ‘Spherical Mercator’ so that they matched up to Google. Spatial Reference codes like 41001, 900913, and 3857/3785 were all quite a ways down the road.
  • Software that hasn’t changed much: GDAL/OGR. GDAL was an extremely useful tool in 2006 — pretty much the same as it is today. Although GDAL has certainly grown many features and become more complete over the years, it still has the same general shape as it did back then. 🙂

(Other things OpenLayers predates: Twitter, open access to Facebook.)

As you would expect, the world has changed. People sometimes comment that OpenLayers feels a bit long in the tooth — something I can certainly sympathize with. I have always prioritized maintaining API compatibility for existing applications over anything else in my personal investment in OpenLayers: the most important thing is not to break existing applications. This stability has allowed many people to use OpenLayers, and I don’t think that violating those principles is a good thing. (I am happy with the solution that has grown over the past 6 months in OpenLayers — moving code to the “deprecated.js” file is a great way to let people maintain backwards compatibility with a path forward as well.)

I’m happy to have other people take the principles created by OpenLayers over the past half decade and do something exciting with them. Competition is good. Options for applications are good. The fact that OpenLayers effectively sucked all of the air out of the room from 2006-2010 was not good for the rest of the web mapping world: without competition, it’s really hard for any innovation to take place.

But the fact that a piece of JavaScript software written in a world before jQuery, before Firebug, while OpenStreetMap was still getting off the ground, is still useful today — I think that’s a testament to what OpenLayers became, and I’m happy to see what it has become and continues to be for many people.