Archive for the 'Technology' Category

!!con Trip Report: Day One

Posted in Technology on May 12th, 2019 at 08:13:55

Yesterday, I spent the day at !!con (“bangbangcon”), a conference which focuses on “The joy, excitement, and surprise of computing”. The experience was everything I hoped it would be and more!

As someone who is enthused and energized by the excitement and joy of others, !!con is a mecca. The 10-minute talks have one key requirement: the talk title must include an exclamation point! (Talks must also be “related to computing” and “about something you think is interesting and cool!” — all of which plays a role in which talks are selected.) These requirements are core to the premise of the con: that talks should be about excitement, and it shows in every aspect of the program. From body modification to unusual game development experiences, from beginners learning about brand-new technology to long-time experts exploring and sharing esoteric knowledge about how something works, the talks were all formed around the joy of discovery.

In the past, I’ve experienced !!con from a distance via the tweets of Liz Fong-Jones. Liz has done an excellent job at pulling the high points from various conferences for years, and I have seen !!con tweets go by and been saddened that I wasn’t aware of it early enough to find a way to attend. Thankfully, this year — due to a timely reminder from a coworker — I was able to get to the event. Being here in person is even more valuable because the excitement over these things is contagious, and is energizing for me all on its own.

Compared to the traditional tech conference, !!con has clearly invested in creating a more welcoming and supportive environment where all kinds of folks feel invited and included, and it has succeeded. It’s visible almost from the first moment you walk in that this is not the audience of your typical tech conference, and it’s eye-opening what a deliberate attempt at inclusion can achieve. Elements like an anonymized review process, a focus on first-time speakers, and encouraging anyone who “finds that people like you are underrepresented at programming conferences” to apply have clearly paid off. (Of course, this is only successful because the organizing committee has done work across the board to make this happen, and has a demonstrated history of getting it right, as far as I can tell.) Both speakers and attendees are a much more diverse mix, in both racial and gender terms, than I have experienced at any tech event before this.

Another element that is a joy is the level of accessibility to the conference. This is my first event with live captioning, and as someone who has recently come to realize the extent to which I require captions to process information well, it is a joy. While I am not hard of hearing, I suffer a lot from difficulty processing accents and certain vocal ranges, as well as suffering to a certain extent from symptoms of ADHD that make some forms of processing more difficult. The captions are simply an incredible tool towards making the content more accessible, and I’m definitely committed to demanding this level of accessibility from future events I participate in.

For my own participation, yesterday I was happy to act as a facilitator of one of the breakout/”unconference” sessions: “Unions: Why You Should Have One.” I didn’t have an explicit goal, but with the broader worker solidarity movement afoot in tech, I felt like providing a space for people to chat about it was worthwhile. To my pleasant surprise, we packed the room: we had more than 30 people who came in and participated, in every stage of the process from “This seems interesting” to folks who are well down the path towards building solidarity among their workforce and building on it. Also, since I’m in a town far from home, I came into this conference without recognizing a lot of faces, and now I’ve made some new friends!

If I were to try to highlight all the talks I loved yesterday, I would just have to list every single one: Every talk had some aspect I enjoyed. That said, I can make a few special call-outs to ones that really stick out to me:

Kate Beard‘s “Let’s build a live chat! from the 1800s (?!) 🤔 using modern web technology!!! 😮” combined an amazing history of the telegraph — how it changed communications worldwide — with a desire to learn how to use the Web Audio API. The result is Morse Chat: an application that you can use to chat with your friends in Morse code. Kate recently finished a 4-month coding boot camp and is now working at the Financial Times, and was clearly excited to put those skills to work in a fun personal project to learn more about web audio and websockets — and created a fun and nifty app in the process. Kate’s excitement over the app was contagious, and I absolutely loved it.
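To give a flavor of the idea (this is my own illustrative sketch in Python, not Kate’s code — her app keys the Web Audio API from timings like these, sent over a websocket):

```python
# Minimal sketch of the encoding half of a Morse chat app: text -> Morse,
# and Morse -> (tone_on, duration) pairs that a beeper could play.

MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
    "F": "..-.", "G": "--.", "H": "....", "I": "..", "J": ".---",
    "K": "-.-", "L": ".-..", "M": "--", "N": "-.", "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.", "S": "...", "T": "-",
    "U": "..-", "V": "...-", "W": ".--", "X": "-..-", "Y": "-.--",
    "Z": "--..",
}

def encode(text: str) -> str:
    """Encode text as Morse: spaces between letters, ' / ' between words."""
    words = text.upper().split()
    return " / ".join(
        " ".join(MORSE[c] for c in word if c in MORSE) for word in words
    )

def timings(code: str, unit_ms: int = 100) -> list[tuple[bool, int]]:
    """Turn a Morse string into (tone_on, duration_ms) pairs."""
    out = []
    for symbol in code:
        if symbol == ".":
            out.append((True, unit_ms))        # dit: 1 unit of tone
        elif symbol == "-":
            out.append((True, 3 * unit_ms))    # dah: 3 units of tone
        elif symbol == " ":
            out.append((False, 2 * unit_ms))   # extra silence between letters
        elif symbol == "/":
            out.append((False, 4 * unit_ms))   # extra silence between words
        out.append((False, unit_ms))           # one unit of silence after each element
    return out

print(encode("SOS"))  # → "... --- ..."
```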

Each section of the con had a theme running through the three to four talks that were grouped together, and the game development section was definitely a big draw to me. The practical exploration of Game Feel from Ayla provided some awesome demos of what you can do (and can’t do) to make games feel better, and Sophie’s use of a hardware build to cheat at Pokemon was terrific as well — but what I really loved was Em‘s terrific story of creating a game that’s fun to play using a stationary bike.

Em has a history of creating games with unusual interactive surfaces, and recognizes that how you interact with something changes how you experience it. When they bought a Bluetooth-enabled stationary bike setup, they found the game UI that came with it insufficiently motivating, and decided to see what they could do to build a better game. They created a prototype application that casts you as an UberEats-style bike messenger: someone who picks up meals and delivers them to clients. Initially, with only a single input (pedal or don’t pedal), creating a compelling game experience turned out to be hard. The part of this talk that was most interesting to me was where an attempt to add an additional input led them during prototyping. The plan was to use turning the handlebars as a way of indicating turns. When stymied by the difficulty of combining this notion with triggering re-routing in commercial mapping UIs like Google Maps, Em recognized a development experience that wasn’t “going with the grain”, and took a step back.

After taking a step back, they realized there was a different approach to take: instead of fighting mapping APIs to implement turning down different streets, they could simply give the player a different set of choices. Set up multiple food pick-ups and drop-offs, and the game becomes a more traditional resource conservation/collection game. Now the choices are “Which orders am I going to pick up and drop off, and in what order?” — creating a game out of solving a modified version of the travelling salesman problem. With that in mind, you have a game with a compelling loop, without needing to fight against the development style encouraged by your tools. This step into the perspective of someone prototyping a game was absolutely one of my favorite points of the con.
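To make the game’s core decision concrete, here is a back-of-the-envelope sketch (my own hypothetical model, not Em’s code): given a few stops on a grid, brute-force the order that costs the least pedalling, travelling-salesman style.

```python
# Hypothetical model of the delivery game's core choice: in which order
# should the rider visit the stops? Brute force is fine for a handful.
from itertools import permutations

def dist(a, b):
    """Manhattan distance — a crude stand-in for street routing."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def best_route(start, stops):
    """Try every stop order and return (total_distance, best_order)."""
    best = None
    for order in permutations(stops):
        total, here = 0, start
        for stop in order:
            total += dist(here, stop)
            here = stop
        if best is None or total < best[0]:
            best = (total, order)
    return best

total, order = best_route((0, 0), [(5, 0), (1, 1), (0, 4)])
print(total, order)  # → 13 ((0, 4), (1, 1), (5, 0))
```

The factorial blow-up doesn’t matter at game scale — with half a dozen active orders there are only a few hundred permutations, and the player is the one doing the real optimizing anyway.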

In the health-related track, I loved Sarah‘s talk, “I Built an Artificial Pancreas!” While I’m not personally a health hacker/body mod type person, I have a lot of respect for those like Sarah who are. But the most interesting part of the whole thing to me was how much more effective her insulin pump is at regulating her blood sugar now: With open source software and open hardware, she was able to pull her actual values directly in line with her target 90% of the time, when before that was almost never the case. Open source software for healthier living? That’s a tagline I can get behind.

All in all, this conference is lovely. The community, the technology, the information and the sense of pure joy from the participants and the speakers are hard to describe, and I’m happy I got the chance to make it here.

Onward, to day two!

Initial Forays into 3D Printing

Posted in 3D Printing on January 19th, 2014 at 11:35:10

Last week, I received my Printrbot Plus v2.1 Kit. I sat down, and started putting it together.

The Assembly page on the kit website describes the kit as taking 6-10 hours to assemble. In the end, I spent about that getting the thing to the point where it was put together; it wasn’t all in one sitting, since I found out I was missing a part about 75% of the way through. You can see a time lapse of the initial assembly on YouTube.

Some things I learned during assembly — which almost anyone who’s built anything probably knows:

  • Lay out your parts and count them before you start. Organize them, and put them in a spot where they are out of the way of your work area, but easily accessible. You’ll save a lot of time that way, and a lot of headache when you later find out you’re missing parts you need.
  • For something like this — which required more than 100 screws — an electric drill is a good idea. I had one, but didn’t think to use it until after my initial work, and I wasted a lot of time as a result.
  • The Printrbot process is targeted at tinkerers. This means that some aspects are… slightly more fiddly than they should be. (One of my parts didn’t fit right to start with, and I spent the first 20 minutes of my assembly process trying to fit a part into a hole that it simply couldn’t fit in.) Don’t fight it too hard; if something doesn’t work, move on and come back.

In the end, I had to buy an ACME threaded hex nut myself to finish the build; the kit shipped with an extra 5/16″ (non-ACME) hex nut instead of the 3/8″ ACME nut I needed. (I emailed Printrbot support last Sunday about this — I’ve still heard nothing.)

Initially, I was worried about my filament feeding; you can see a video showing how it doesn’t feed straight down, but feedback on the Printrbot Talk forum suggested this is normal, and I took the next steps of doing a print the following morning.

The first print came out … poorly 🙂

The reason, though, was immediately obvious: early in the print, I found a set screw missing from my y-axis motor. With each layer that printed, the y-axis slipped about 2-3mm in the wrong direction, which led to the disaster that you can see.

On Friday night, I fixed that, and the next print was actually stacking the layers, which is good; but my bed being far from level meant that the print head was running into itself on higher layers, and created a mess.

My third print was the charm:

(Total print time was about 15 minutes.)

It produced a reasonably solid calibration print. (You can see a video of the printing process: part 1, Part 2.)

Since then, I’ve had more issues with extruder feeding and the like, but at least one of my prints actually worked, and it seems like I’m now in the same part of the printing process that most hobbyists get to and stay in. 🙂

In total, I spent about 10 hours building the thing, and another couple of hours getting it set up and working. I consider the project a success overall.

Hopefully in the next couple of weeks I can finish my assembly, get things tightened up, and get a few more interesting things printed 🙂

3d printer delivery: At Long Last

Posted in 3D Printing, Technology on January 10th, 2014 at 07:55:54

After ordering on Dec 13th, and the package being dropped off at the UPS Store in California on Dec 31st, my 3d printer is finally in Somerville, MA, with the expected delivery later today. While I’m not going to claim it will actually be here today — I mean, after all, it was supposed to be here yesterday as well — I am slightly hopeful, since the UPS website at least claims it is in this state.

… Crap. This means that I will soon have to follow the 122-step assembly process. On the plus side, only a half dozen or so of the steps have comments attached to them describing them as impossible with the materials provided, so that should be good!

When purchasing, I had the option of buying an assembled kit for $100 more. Given the 6-8 hour timeline for doing the build, this would almost certainly have been a financially wise course of action; however, as I told a coworker: if I can’t even sit down and spend 6 hours building the thing, when am I ever going to make time to fiddle with *actually printing something*?

AppleTV: aka AirPlay receiver

Posted in Apple TV, HDTV, Technology on January 7th, 2014 at 05:00:44

Along with the new TV, I also set up an AppleTV — a small set-top box designed to hook up to the Internet and provide some content. Or something.

I say this because I really don’t understand what AppleTV is supposed to be doing for me; it’s a walled garden of apps, with no ability to extend it — no app store, or anything like it — and I can’t understand a lot of what it is useful for. I suppose part of this is because I’ve never bought into the iTunes way of life — I don’t buy videos or music on iTunes, and I don’t even know the password for my MyAppleCloudWhatever account, so in some ways, I’m probably not an ideal candidate for the Apple way of life that the Apple TV is trying to tie into.

However, the Apple TV has proven useful for one thing that I didn’t know anything about when I set it up: AirPlay. Apple’s AirPlay started out as AirTunes, for streaming music content, and grew into a more general media (and screen) sharing technology later on. I’ve seen options for AirPlay in OS X for a number of years, but I didn’t really know much about it, so I just ignored it.

I set up the AppleTV, but wasn’t really using it — I had watched some TV while plugged into an HDMI cable directly from my Macbook Pro, but not poked at the AppleTV at all. Then, I turned on the TV… and Kristan’s computer screen was mirrored on the TV. (Apparently some apps when they go into fullscreen mode will automatically activate AirPlay in some way — specifically, the Cake Mania Main Street game appears to do this.) Prior to that, I didn’t really have any idea what AirPlay was — but suddenly, I found out that I could put whatever was on my screen on the TV with one button click.

To me, this is one of those times when technology actually (mostly) works: I’m watching something on my computer, someone else in the room says “That sounds interesting, you should put it up on the TV”… and one button click later, it’s on the TV.

Of course, it’s still software, so it’s not without its flaws.

  • Sometimes, in order to get sound to go to the TV, you need to restart the CoreAudio daemon; this MacRumors thread describes the problem and the command-line workaround: sudo kill `ps -ax | grep 'coreaudiod' | grep 'sbin' | awk '{print $1}'`
  • I actually found that the Linksys WRT54G router that I had wasn’t keeping up with the demands of running AirPlay over the wireless; even plugging the AppleTV into the ethernet was still not up to snuff, so I unboxed the Apple Airport Extreme we’ve also had lying around; switching to that cleared the issues up. (Looking at the CPU usage on the router, I think this is actually just that the chip can’t keep up with the demands of the network traffic — it was maxing out the CPU moving data around — rather than any specific software problem.)

Of course, the AppleTV has other functionality — the ‘apps’ that exist on it. So far, I’ve used both the Netflix and YouTube apps on it, and neither leaves me super impressed. (Admittedly, I apparently used the ‘default’ software that came with the device; I received a Software Update a couple days later which installed about 10x as many apps, and probably changed the functionality of the apps I did have, so some of this criticism may be out of date.) The Netflix app lacked auto-play (a key feature for me, since I spend most of my time watching many episodes of television shows), and even navigating to the next episode via button presses proved more annoying than it should have been.

The YouTube app appeared to completely lack the ability to downgrade the streaming quality: it would play at whatever the highest quality level was, which just flat out didn’t function on my DSL connection. That connection could usually support a 720p stream, but not reliably. This meant that all the time on YouTube was spent buffering videos and no time actually watching them.

If the AppleTV actually supported DIAL — a spec for remote device discovery and application launching — I could imagine using it a bit more. If I could just launch the Netflix player on the AppleTV by clicking a button on my laptop, I could certainly imagine using it more, and the same with the YouTube app. AirPlay makes things easy, but it means that I’m tied up, unable to use my computer while using AirPlay; it would be worth it to me to use the less fully featured app if I could start it more easily. (Searching via an on-screen keyboard with a 6-button remote is not particularly user-friendly.) In fact, I’m considering setting up my long-avoided Google TV (Logitech Revue) to see if it will function in this way — or possibly even going the next step and buying a Chromecast solely for this functionality. Of course, Apple and standards have never been great friends, so I’m not surprised here, just annoyed.
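For the curious, the discovery half of DIAL is just SSDP: a multicast M-SEARCH with a DIAL-specific search target, to which capable devices reply. A minimal sketch in Python (illustrative only — as noted, the AppleTV wouldn’t answer it, and the timeout values are arbitrary):

```python
# Sketch of DIAL device discovery via SSDP: multicast an M-SEARCH to
# 239.255.255.250:1900 with the DIAL service search target.
import socket

SSDP_ADDR = ("239.255.255.250", 1900)

def m_search(search_target="urn:dial-multiscreen-org:service:dial:1", mx=2):
    """Build the SSDP M-SEARCH request used for DIAL discovery."""
    return (
        "M-SEARCH * HTTP/1.1\r\n"
        f"HOST: {SSDP_ADDR[0]}:{SSDP_ADDR[1]}\r\n"
        'MAN: "ssdp:discover"\r\n'
        f"MX: {mx}\r\n"
        f"ST: {search_target}\r\n"
        "\r\n"
    ).encode("ascii")

def discover(timeout=3.0):
    """Send the search and collect raw responses (needs a live network)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(m_search(), SSDP_ADDR)
    replies = []
    try:
        while True:
            data, addr = sock.recvfrom(4096)
            replies.append((addr, data))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return replies
```

A responder then advertises an HTTP endpoint for launching apps by name — which is exactly the “click a button on my laptop, Netflix starts on the TV” flow described above.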

So far, the Apple TV doesn’t provide a lot of functionality for me. With a little bit of software support, I think it could be a much more useful device; DIAL support would be a killer app for me. I don’t ever expect to use anything other than YouTube and Netflix as far as apps go — the others require subscriptions I don’t have in order to be useful, or just aren’t that interesting. Without an iTunes account, I don’t see any major benefits from shared media purchasing. However, as an AirPlay receiver for quickly sharing what’s on my screen, I think it’s a useful device to keep around, and I expect I will continue to let it have a home in my living room entertainment setup for that reason alone.

Raspberry Pi

Posted in Raspberry Pi, Technology on January 6th, 2014 at 08:30:30

Over Christmas, I bought myself a 3D printer kit from printrbot. It didn’t arrive in time to occupy me over the holiday break, however, which meant that over the weekends, I was actually lacking in toys to play with for myself. Since I had already spent somewhat lavishly on myself — I bought the Printrbot Plus kit, which is pricier than I really should have — I was looking for a toy that would be entertaining but also relatively inexpensive.

In the end, I went with the Raspberry Pi Model B: A credit card sized ARM-based computer. Part of the reason was actually related to my 3D Printer: Since prints take a long time to run, having the Pi as an extra computer I can hook up to the printer when needed seemed opportune — but part of it was just the fact that it was relatively inexpensive and seemed like it might let me do some interesting things.

Originally, I had planned on using it as a video game emulator — something that a number of people have talked about having done, via distributions like RetroPie and the like. Unfortunately, I’ve had some trouble getting the primary platform I’m interested in emulating — NES — to run at a reasonable speed. (Amusingly, SNES emulators seem to actually be noticeably faster.) This has led to some interesting research and reading about emulators — like an article in Ars Technica about how accurate emulation requires much more resources than the original hardware, and how emulator developers should take advantage of the hardware they have available these days.

In the end, I haven’t really done much of interest with the Pi yet — nothing I couldn’t have done using the Linux machine that I already pay for on Linode. However, it has led to me actually doing some things that I hadn’t otherwise — setting up Asterisk, for example. It’s now possible to dial a given phone number and be logged into a conference call served from my Raspberry Pi. I also set up munin; after having serious issues with Verizon for months, I was finally able to track and confirm that I was getting packet loss to Google at regular intervals. None of this is particularly complex, and there’s nothing Pi-specific about it — these are things you can do pretty easily with *any* Linux computer — but I haven’t had another computer running in this house for years, and the Pi provided motivation to learn these things.
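As a rough stand-in for what the munin graphs showed, the core measurement is just running ping and pulling the loss percentage out of its summary line (a sketch, not munin itself — munin does this with its own plugins and RRD graphs; the host here is an arbitrary example):

```python
# Measure packet loss to a host by parsing the `ping` summary line, e.g.
# "10 packets transmitted, 8 received, 20% packet loss, time 9012ms".
import re
import subprocess

LOSS_RE = re.compile(r"([\d.]+)% packet loss")

def parse_loss(ping_output: str) -> float:
    """Extract the packet-loss percentage from ping's summary output."""
    match = LOSS_RE.search(ping_output)
    if match is None:
        raise ValueError("no packet loss summary found")
    return float(match.group(1))

def measure_loss(host="8.8.8.8", count=10):
    """Ping a host `count` times and report loss; needs a live network."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True,
    )
    return parse_loss(result.stdout)

sample = "10 packets transmitted, 8 received, 20% packet loss, time 9012ms"
print(parse_loss(sample))  # → 20.0
```

Run from cron and appended to a log, even this crude version is enough to show a carrier the loss happens at regular intervals.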

That doesn’t mean that I don’t plan to use the Pi to do things that *do* actually get benefit out of it; I made my first order to Adafruit the other day, buying a digital temperature sensor and Pi Cobbler breakout board for the Pi, as well as breadboarding bits, to be able to experiment with the GPIO pins on the Pi; I’m hoping that I can entice Julie into doing some electronics experiments with me using the Pi. (She enjoyed the toy electronics kit that we got for her a while back, but I get the feeling that she’d be more interested in more involved electronics work.)
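Assuming the digital temperature sensor is a DS18B20-style 1-Wire part (a guess on my part — the post doesn’t name the sensor, and other parts work differently), reading it on the Pi doesn’t even need GPIO code: the kernel’s w1 driver exposes readings as a small text file, which looks like this to parse:

```python
# Read a DS18B20-style 1-Wire temperature sensor on a Raspberry Pi.
# The kernel w1_therm driver exposes each sensor as a two-line text file:
#   <hex bytes> : crc=<xx> YES
#   <hex bytes> t=21000        <- millidegrees Celsius
import glob

def parse_w1_slave(text: str) -> float:
    """Parse the two-line w1_slave format into degrees Celsius."""
    lines = text.strip().splitlines()
    if not lines[0].endswith("YES"):
        raise ValueError("CRC check failed; re-read the sensor")
    _, _, raw = lines[1].partition("t=")
    return int(raw) / 1000.0

def read_temperature():
    """Read the first 1-Wire sensor found (hardware required)."""
    device = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")[0]
    with open(device) as f:
        return parse_w1_slave(f.read())

sample = (
    "50 01 4b 46 7f ff 0c 10 1c : crc=1c YES\n"
    "50 01 4b 46 7f ff 0c 10 1c t=21000\n"
)
print(parse_w1_slave(sample))  # → 21.0
```

The breadboard and Cobbler come in for wiring the sensor to the GPIO header (plus a pull-up resistor); the software side is just file reading, which makes it a gentle first electronics experiment.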

I also still plan to use the Pi to drive 3D Printing, though I expect it will be a while before I’m in a position to actually need it — given how finicky 3d printing tends to be, I expect to be at the “Goddamnit nothing works” stage for quite a while first. (Of course, before that, I’ll have to put the darn thing together, which may or may not be sufficiently complex to cause me to give up on the whole notion.)

In short: Raspberry Pi. Cheap, Small Linux computer. For me, not a lot more or a lot less — but having a Linux computer in the house is a nice thing to have. It’s also got me interested in a few new areas of interest — emulation, home automation, and amateur electronics — which may prove more interesting than the computer itself.

Adding a new member to my home electronics: Modern Television

Posted in HDTV, Technology on January 6th, 2014 at 00:45:57

Over the Christmas break, I invited a new friend into my home electronics: a 32″, 1080p television set.

Now, most people would say “What? You’ve migrated most of your media consumption to your laptop anyway, why would you go back to having a functioning television?” And that’s a completely reasonable question. The primary reason is: for fun.

You see, I don’t want to have a television because I want to watch TV. In fact, the reason I set up the television is actually unrelated to media consumption at all — the reason I set up the TV is because I wanted to buy a Raspberry Pi, and the TV was the only display I had in the house (long story) which supports HDMI in. (I don’t even own a monitor with DVI in — only VGA — so I couldn’t even just buy a cheap adapter.)

This has resulted in me putting together a lot of little things that we’ve had floating around the house for a while, but never actually used — an Apple TV my wife bought back when we were watching TV more often (but which only supported HDMI, which our old TV didn’t have); a new Airport Extreme wireless base station — and buying some new, relatively inexpensive things as well — a broadcast television antenna, for example — and even a shift in my home internet provider.

I’m going to try to write a bit more about each of these things individually — why I ended up with them — but I wanted to start with the basics: the only reason I did any of this at all was that the only monitor I had for the $35, credit-card-sized computer in the house was a $500 Philips HDTV.

Somewhere in that picture, there is some irony. (Or maybe just a First World Problem.)

Technocentric Thinking

Posted in Social, Technology on June 28th, 2008 at 17:34:52

Chad writes:

I know their “motto” is “Don’t Be Evil” .. but I think it should be “Don’t Be Smart” instead.. this is some dumb thinking from Google. Trust me.. I know better than Google on how I want to download and install my software.

This is just the latest in a whole lot of similar statements I’ve seen from many people across the web in a variety of situations talking about how “I know how to manage my machine”, with the underlying meaning being something like “You should act as if people who are working with your software know how to work their computers.”

When I put it that way, does it really sound right? Is there anyone who thinks that the *majority* of users of Google Earth actually know how to run their machines? Is there anyone who thinks that it makes sense for Google to build and QA two different install mechanisms — one for technical users who know what they’re doing, and one for those who don’t?

Very few companies the size of Google do anything on a whim. I expect that some thought went into the development of the Google Earth downloader. The fact that the thinking is not centered around technically competent users is just evidence that Google doesn’t need to target the early adopters; it’s not a sign that what they are doing is ‘Bad’ or ‘Stupid’.

Technical users are few and far between in the mass market. Google Earth is targeted towards the mass market. Just like all software that is targeted towards a mass market, there is nothing ‘stupid’ about removing tools that the majority of users don’t need or care about: By doing so, you limit the number of people who are going to end up confused by your tool, and that’s not a bad thing when you care about the majority of people instead of a technical elite.

Mapserver Rendering Bug

Posted in Locality and Space, Mapserver, Software, WMS on May 6th, 2006 at 14:58:18

One of the problems I’m running into with mapserver right now is related to its rendering of LINE elements wider than one pixel as they run into tile boundaries at acute angles. It seems that mapserver draws the centerlines for these elements up to the side of the image — but where a line approaches a boundary at an acute angle, this means the ‘outer’ edge of the rendered line stops short of the edge.

With non-anti-aliased lines, this is less visible as a problem (especially if you’re not looking at the images as tiles) because the lines just stop — and visually, it’s hard to tell whether that’s at the image edge. However, it becomes very obvious when anti-aliasing is on, because the left and right edges of the line are ‘tied’ together by a curving, anti-aliased line, resulting in a bubbled look at tile boundaries.
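A back-of-the-envelope model of why the angle matters: if the centerline is clipped with a flat end cap at the tile edge, the stroke’s outer corner falls short of that edge by roughly (w/2)·cos(θ), where w is the stroke width and θ the angle between the line and the boundary. (This is my simplified model, not mapserver’s actual stroking code, which may behave differently.)

```python
# Simplified butt-cap model: how far short of the tile edge does the
# outer edge of a clipped stroke stop, as a function of approach angle?
import math

def edge_gap(width_px: float, angle_deg: float) -> float:
    """Shortfall of the stroke's outer edge from the tile edge, in pixels."""
    return (width_px / 2.0) * math.cos(math.radians(angle_deg))

for angle in (90, 45, 15):
    print(angle, round(edge_gap(8, angle), 2))
```

For a perpendicular crossing (90°) the gap vanishes, which is why only acute crossings show the artifact; at 15° the gap for an 8px line is nearly the full half-width, and anti-aliasing then draws a visible curve across it.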

I’m not sure if this is a known bug, or something that other people have run into: I’m mostly recording it here so that I have a description of it.

Right now, it seems like the workarounds are:
* Use thinner lines (so it’s less visible)
* Don’t have roads near boundaries at acute angles — although there’s not much I can do about this one!

WMS Is Great…

Posted in Locality and Space, WMS on April 26th, 2006 at 06:14:27

WMS is great… when it works. But having all my maps go busted because a server outside of my control decides that it doesn’t want to work today (today, the MassGIS WMS server, yesterday, a TIGER/LINE source I was using) is really annoying.

Oh well. At least one of my two basemaps is up.

OpenGuides Map Changes

Posted in Javascript, OpenGuides, WebKit on January 9th, 2006 at 05:37:18

So, tonight I took it upon myself to redo the javascript behind the map for the Open Guide to Boston. The reason for this is relatively simple: when I first wrote the Google Maps interface, the guide had about 50 nodes. With that in mind, drawing them all was acceptable. As the guide got bigger, it got a bit more time consuming, but was still much easier than paging. However, the guide has recently doubled in size as I’ve increased the rate at which I pull data from (specifically for the purpose of trying to build a free database of all the churches in the area with user interaction) — up to 2400 nodes (making it the single largest Open Guide to date by pure node size, as far as I can tell).

With this change, the map was no longer just difficult to use: it was impossible. Attempting to open the page crashed my web browser.

So, I took it upon myself to learn enough JavaScript to do paging… but then decided that rather than paging, I should attempt a bit more friendly solution to start. So I started hacking, and got into the Google Maps getBoundsLatLng() function, which allows me to determine the area of the map.

There are two basic options to go from here, depending on performance you expect in various situations, and the scalability you want to have.

* Load all data. Create Javascript variables to store it all. When the map moves, iterate over the data, adding markers for only points which are not already visible, and are in the new span.
* Load only the original data from the database. All additional data to be loaded via XMLHttpRequest.

The first is obviously something that can be done entirely client side, and in fact turned out to be the (much easier than I expected) path I chose. The largest reason for this is simply ease of integration. OpenGuides code is rather convoluted (to me), and modifying it is not something that I enjoy spending a lot of time doing. I’ll do it when I need to, but I prefer to avoid it. The other option would be to query the database directly via Perl or some other language, without using the OG framework. This would probably be slightly faster, but with the size of data I’m using, iterating over it takes minimal time compared to the time spent drawing the GMaps markers. As a result, I chose the slower, less scalable way of doing things, which was quicker to implement and is easier to merge into other guides. The biggest benefit here is that I can merge it back into the other quickly growing guides — Saint Paul is now growing leaps and bounds alongside Boston, due to help from pulling’s database, and I plan to do similar things with other guides. Also, London probably will not be able to use the mapping stuff in any useful way without this kind of code being added, so I went for the quickest thing that could possibly work.
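The first option boils down to a small amount of logic. Sketched here in Python for readability (the real code is JavaScript against the Google Maps API; names and the bounds layout are my own):

```python
# Client-side marker paging, option 1: keep all nodes in memory and, on
# each map move, draw markers only for points that are inside the new
# bounds and not already drawn. (Removal is skipped on purpose — see below.)

def in_bounds(point, bounds):
    """bounds = (min_lat, min_lon, max_lat, max_lon), point = (lat, lon)."""
    lat, lon = point
    return bounds[0] <= lat <= bounds[2] and bounds[1] <= lon <= bounds[3]

def update_markers(all_nodes, visible, bounds):
    """Return ids of nodes to newly draw; update `visible` to match."""
    to_add = set()
    for node_id, point in all_nodes.items():
        if node_id not in visible and in_bounds(point, bounds):
            to_add.add(node_id)
            visible.add(node_id)
    return to_add

nodes = {1: (42.35, -71.06), 2: (42.40, -71.10), 3: (44.00, -70.00)}
visible = set()
print(update_markers(nodes, visible, (42.0, -72.0, 43.0, -71.0)))  # → {1, 2}
```

Markers that scroll out of view are deliberately left in place rather than removed, which matches the gotcha below about removal being far slower than addition.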

There were a few gotchas that I had to end up avoiding:

* Removing Google Maps Markers takes *much* longer than adding them. Like, 3-4 times longer in extremely informal testing. With one or two, or even 20-30, this doesn’t matter, but with 100, this starts to be a significant barrier to user interaction.
* There are a number of words which are reserved in JavaScript, but are not treated that way in Firefox. This leads to confusing error messages in Safari (although they are slightly improved in the latest WebKit): if you see a ParseError, you should check the Firefox reserved words list — this is in one of their bugs. For example, the variable “long” is reserved, and cannot be used to pass something like, say, longitude.

Discovering the ParseError, working out why it was there, and how to fix it, was greatly helped by the #webkit folks: I have never seen a more dedicated, hardworking, friendly, open and inviting group of Open Source Software developers in my history as a programmer, either in closed or open source products. Many thanks to them for helping me work through why the error was happening, and how to avoid it in the future.

Lastly (although this is not quite chronological, as I did this first), I had to modify the code to the guide to zoom in farther to start, so it wouldn’t try to load so many nodes. I think a next step is to allow users to set a default lat/long for themselves, a la Google Local, from which they can start browsing, rather than always dropping them into the main map. But that’s a problem for another late-night hack session.