Archive for the 'Technology' Category

Initial Forays into 3D Printing

Posted in 3D Printing on January 19th, 2014 at 11:35:10

Last week, I received my Printrbot Plus v2.1 kit. I sat down and started putting it together.

The Assembly page on the kit’s website estimates 6-10 hours of assembly time. In the end, that’s about what I spent getting the thing put together, though not all in one sitting, since I found out I was missing a part about 75% of the way through. You can see a time lapse of the initial assembly on YouTube.

Some things I learned during assembly — things that almost anyone who’s built anything probably already knows:

  • Lay out your parts and count them before you start. Organize them, and put them in a spot where they are out of the way of your work area, but easily accessible. You’ll save a lot of time that way, and a lot of headache when you later find out you’re missing parts you need.
  • For something like this — which required driving more than 100 screws — an electric drill is a good idea. I had one, but didn’t think to use it until well into the build, and wasted a lot of time as a result.
  • The Printrbot process is targeted at tinkerers. This means that there are some aspects which are… slightly more fiddly than they should be. (One of my parts didn’t fit right to start with, and I spent the first 20 minutes of assembly trying to force a part into a hole it simply couldn’t fit in.) Don’t fight it too hard; if something doesn’t work, move on and come back to it.

In the end, I had to buy myself an ACME threaded hex nut to finish the build: the kit shipped with an extra 5/16″ (non-ACME) hex nut instead of the 3/8″ ACME nut I needed. (I emailed Printrbot support about this last Sunday; I’ve still heard nothing back.)

Initially, I was worried about my filament feeding — you can see a video showing how it doesn’t feed straight down — but feedback on the Printrbot Talk forum suggested this is normal, so I went ahead and attempted a print the following morning.

The first print came out… poorly :)

The reason, though, was immediately obvious: early on in my print process, I noticed a set screw missing from my y-axis motor. With each layer printed, the y-axis slipped about 2-3mm in the wrong direction, which led to the disaster that you can see.

On Friday night, I fixed that, and the next print was actually stacking the layers, which was good; but my bed was far from level, which meant that the print head was running into the already-printed layers higher up, creating a mess.

My third print was the charm:

(Total print time was about 15 minutes.)

It produced a reasonably solid calibration print. (You can see a video of the printing process: Part 1, Part 2.)

Since then, I’ve had more issues with extruder feeding and the like, but at least one of my prints actually worked, and it seems like I’ve reached the same part of the printing process that most hobbyists get to and stay in. :)

In total, I spent about 10 hours building the thing, and another couple of hours getting it set up and working. I consider the project a success overall.

Hopefully in the next couple weeks I can finish my assembly, get things tightened up, and get a few more interesting things printed :)

3D printer delivery: At Long Last

Posted in 3D Printing, Technology on January 10th, 2014 at 07:55:54

After ordering on Dec 13th, and the package being dropped off at the UPS Store in California on Dec 31st, my 3D printer is finally in Somerville, MA, with delivery expected later today. While I’m not going to claim it will actually be here today — I mean, after all, it was supposed to be here yesterday as well — I am slightly hopeful, since the UPS website at least claims it is in this state.

… Crap. This means that I will soon have to follow the 122-step assembly process. On the plus side, only a half dozen or so of the steps have comments attached describing them as impossible with the materials provided, so that should be good!

When purchasing, I had the option of buying the kit pre-assembled for $100 more. Given the 6-8 hour timeline for the build, this would almost certainly have been the financially wise course of action; however, as I told a coworker: if I can’t even sit down and spend 6 hours building the thing, when am I ever going to make time to fiddle with *actually printing something*?

AppleTV: aka AirPlay receiver

Posted in Apple TV, HDTV, Technology on January 7th, 2014 at 05:00:44

Along with the new TV, I also set up an AppleTV — a small set-top box designed to hook up to the Internet and provide some content. Or something.

I say this because I really don’t understand what the AppleTV is supposed to be doing for me; it’s a walled garden of apps, with no ability to extend it — no app store, or anything like it — and I can’t figure out what much of it is useful for. I suppose part of this is because I’ve never bought into the iTunes way of life — I don’t buy videos or music on iTunes, and I don’t even know the password for my MyAppleCloudWhatever account — so in some ways, I’m probably not an ideal candidate for the Apple ecosystem the AppleTV is trying to tie into.

However, the Apple TV has proven useful for one thing I didn’t know anything about when I set it up: AirPlay. Apple’s AirPlay started out as AirTunes, for streaming music, and later grew into a more general media (and screen) sharing technology. I’d seen options for AirPlay in OS X for a number of years, but I didn’t know much about it, so I just ignored it.

I set up the AppleTV, but wasn’t really using it — I had watched some TV with my MacBook Pro plugged directly into an HDMI cable, but hadn’t poked at the AppleTV at all. Then I turned on the TV… and Kristan’s computer screen was mirrored on it. (Apparently some apps automatically activate AirPlay when they go into fullscreen mode — specifically, the Cake Mania Main Street game appears to do this.) Prior to that, I didn’t really have any idea what AirPlay was — but suddenly, I found that I could put whatever was on my screen onto the TV with one button click.

To me, this is actually one of those times when technology (mostly) works: I’m watching something on my computer, someone else in the room says “That sounds interesting, you should put it up on the TV”… and one button click later, it’s on the TV.

Of course, it’s still software, so it’s not without its flaws.

  • Sometimes, in order to get sound to go to the TV, you need to restart the CoreAudio daemon; this MacRumors thread describes the problem and the command-line workaround: sudo kill `ps -ax | grep 'coreaudiod' | grep 'sbin' | awk '{print $1}'`
  • I actually found that the Linksys WRT54G router I had wasn’t keeping up with the demands of running AirPlay over wireless; even plugging the AppleTV into ethernet was still not up to snuff, so I unboxed the Apple AirPort Extreme we’ve also had lying around, and switching to that cleared the issues up. (Looking at the CPU usage on the router, I think this is just that the chip can’t keep up with the demands of the network traffic — it was maxing out the CPU moving data around — rather than any specific software problem.)

Of course, the AppleTV has other functionality — the ‘apps’ that exist on it. So far, I’ve used both the Netflix and YouTube apps, and neither leaves me super impressed. (Admittedly, I was using the ‘default’ software that came with the device; a Software Update a couple of days later installed about 10x as many apps, and probably changed the functionality of the apps I did have, so some of this criticism may be out of date.) The Netflix app lacked auto-play (a key feature for me, since I mostly watch television shows many episodes at a time), and even navigating to the next episode via button presses proved more annoying than it should have been.

The YouTube app appeared to completely lack the ability to downgrade the streaming quality — it would play at whatever the highest quality level was — which just flat out didn’t function on my DSL connection; the connection could usually support a 720p stream, but even that wasn’t reliable. This meant that all the time on YouTube was spent buffering videos and no time actually watching them.

If the AppleTV actually supported DIAL — a spec for remote device discovery and application launching — I could imagine using it a bit more. If I could launch the Netflix player on the Apple TV by clicking a button on my laptop, I could certainly imagine using it more, and the same goes for the YouTube app. AirPlay makes things easy, but it means I can’t use my computer for anything else while AirPlay is running; it would be worth using the less fully featured app if I could start it more easily. (Searching via an on-screen keyboard with a 6-button remote is not particularly user-friendly.) In fact, I’m considering setting up my long-avoided Google TV (Logitech Revue) to see if it will function in this way — or possibly even going the next step and buying a Chromecast solely for this functionality. Of course, Apple and standards have never been great friends, so I’m not surprised here, just annoyed.
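
For the curious: DIAL rides on top of SSDP, using a DIAL-specific search target for discovery and a small REST interface for launching apps. Nothing in my living room answers this probe, but a discovery sketch in Node.js looks roughly like the following (treat it as an illustration of the spec, not tested code):

    // Multicast an SSDP M-SEARCH for the DIAL service type and print
    // whatever answers. A DIAL device replies with a LOCATION header
    // pointing at its device description, which in turn advertises the
    // REST URL used to launch apps like YouTube or Netflix remotely.
    var dgram = require('dgram');

    var MSEARCH = [
        'M-SEARCH * HTTP/1.1',
        'HOST: 239.255.255.250:1900',
        'MAN: "ssdp:discover"',
        'MX: 3',
        'ST: urn:dial-multiscreen-org:service:dial:1',
        '', ''
    ].join('\r\n');

    var socket = dgram.createSocket('udp4');
    socket.on('message', function (msg, rinfo) {
        console.log('DIAL response from ' + rinfo.address + ':\n' + msg.toString());
    });

    var buf = new Buffer(MSEARCH);
    socket.send(buf, 0, buf.length, 1900, '239.255.255.250');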

So far, the Apple TV doesn’t provide a lot of functionality for me. With a little bit of software support, I think it could be a much more useful device — DIAL support would be a killer app for me. I don’t ever expect to use anything other than YouTube and Netflix as far as apps go — the others require subscriptions I don’t have in order to be useful, or just aren’t that interesting — and without an iTunes account, I don’t see any major benefits from shared media purchasing. However, as an AirPlay receiver for quickly sharing what’s on my screen, it’s a useful device to keep around, and I expect I’ll continue to give it a home in my living room entertainment setup for that reason alone.

Raspberry Pi

Posted in Raspberry Pi, Technology on January 6th, 2014 at 08:30:30

Over Christmas, I bought myself a 3D printer kit from Printrbot. It didn’t arrive in time to occupy me over the holiday break, however, which meant that over the weekends I was lacking in toys to play with. Since I had already spent somewhat lavishly on myself — the Printrbot Plus kit is pricier than I really should have gone for — I was looking for a toy that would be entertaining but also relatively inexpensive.

In the end, I went with the Raspberry Pi Model B: A credit card sized ARM-based computer. Part of the reason was actually related to my 3D Printer: Since prints take a long time to run, having the Pi as an extra computer I can hook up to the printer when needed seemed opportune — but part of it was just the fact that it was relatively inexpensive and seemed like it might let me do some interesting things.

Originally, I had planned on using it as a video game emulator — something a number of people have talked about doing, via distributions like RetroPie. Unfortunately, I’ve had some trouble getting the primary platform I’m interested in emulating — the NES — to run at a reasonable speed. (Amusingly, SNES emulators seem to actually be noticeably faster.) This has led to some interesting research and reading about emulators — like an article in Ars Technica about how accurate emulation requires far more resources than the original hardware, and how emulator developers should take advantage of the hardware available to them these days.

In the end, I haven’t really done much of interest with the Pi yet — nothing I couldn’t have done using the Linux machine I already pay for on Linode. However, it has led to me doing some things I hadn’t otherwise — setting up Asterisk, for example: it’s now possible to dial a given phone number and be dropped into a conference call served from my Raspberry Pi. I also set up munin; after months of serious issues with Verizon, I was finally able to track and confirm that I was getting packet loss to Google at regular intervals. None of this is particularly complex, and there’s nothing Pi-specific about it — these are things you can do pretty easily with *any* Linux computer — but I haven’t had another computer running in this house for years, and the Pi provided the motivation to learn them.
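
To give a flavor of how simple this stuff is, the heart of the packet-loss check amounts to something like the following Node.js sketch: a hypothetical stand-in for the real munin plugin, pinging Google’s public DNS at 8.8.8.8.

    // Every five minutes, ping Google and log the packet loss figure
    // from ping's summary line; a munin plugin would hand this number
    // to the grapher instead of printing it.
    var exec = require('child_process').exec;

    setInterval(function () {
        // ping exits non-zero when packets are lost, but its summary
        // still lands on stdout, so the error argument is ignored here.
        exec('ping -c 10 8.8.8.8', function (err, stdout) {
            var match = stdout.match(/([\d.]+)% packet loss/);
            if (match) {
                console.log(new Date().toISOString() + ' loss: ' + match[1] + '%');
            }
        });
    }, 5 * 60 * 1000);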

That doesn’t mean I don’t plan to use the Pi for things that *do* actually benefit from it; I made my first order to Adafruit the other day, buying a digital temperature sensor and a Pi Cobbler breakout board, as well as breadboarding bits, so I can experiment with the Pi’s GPIO pins. I’m hoping I can entice Julie into doing some electronics experiments with me. (She enjoyed the toy electronics kit we got her a while back, but I get the feeling she’d be more interested in more involved electronics work.)
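
As a preview of the sort of thing I’m hoping we can do: assuming the sensor turns out to be a DS18B20 (the usual 1-Wire digital temperature sensor; an assumption on my part, not something from my order confirmation), reading it on the Pi comes down to reading a file once the w1-gpio and w1-therm kernel modules are loaded. A minimal sketch, with a made-up device ID:

    // The 1-Wire drivers expose each sensor as a directory under
    // /sys/bus/w1/devices/ (DS18B20s get a 28- prefix). The second
    // line of the w1_slave file ends in "t=23437": the temperature
    // in thousandths of a degree Celsius.
    var fs = require('fs');

    var device = '/sys/bus/w1/devices/28-000004a5b123/w1_slave'; // hypothetical ID

    fs.readFile(device, 'utf8', function (err, data) {
        if (err) throw err;
        var match = data.match(/t=(-?\d+)/);
        if (match) {
            console.log('Temperature: ' + parseInt(match[1], 10) / 1000 + ' C');
        }
    });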

I also still plan to use the Pi to drive 3D Printing, though I expect it will be a while before I’m in a position to actually need it — given how finicky 3d printing tends to be, I expect to be at the “Goddamnit nothing works” stage for quite a while first. (Of course, before that, I’ll have to put the darn thing together, which may or may not be sufficiently complex to cause me to give up on the whole notion.)

In short: Raspberry Pi. Cheap, small Linux computer. For me, not a lot more and not a lot less — but having a Linux computer in the house is a nice thing. It’s also gotten me interested in a few new areas — emulation, home automation, and amateur electronics — which may prove more interesting than the computer itself.

Adding a new member to my home electronics: Modern Television

Posted in HDTV, Technology on January 6th, 2014 at 00:45:57

Over the Christmas break, I invited a new friend into my home electronics: a 32″, 1080p television set.

Now, most people would say “What? You’ve migrated most of your media consumption to your laptop anyway, why would you go back to having a functioning television?” And that’s a completely reasonable question. The primary reason is: for fun.

You see, I don’t want to have a television because I want to watch TV. In fact, the reason I set up the television is actually unrelated to media consumption at all — the reason I set up the TV is because I wanted to buy a Raspberry Pi, and the TV was the only display I had in the house (long story) which supports HDMI in. (I don’t even own a monitor with DVI in — only VGA — so I couldn’t even just buy a cheap adapter.)

This has resulted in me putting to use a lot of little things that we’ve had floating around the house for a while but never actually used — an Apple TV my wife bought back when we were watching TV more often (but which only supported HDMI, which our old TV didn’t have); an AirPort Extreme wireless base station — buying some new, relatively inexpensive things as well — a broadcast television antenna, for example — and even shifting my home internet provider.

I’m going to try to write a bit more about each of these things individually — why I ended up with them — but I wanted to start with the basics: the only reason I did any of this at all was that the only monitor I had for the $35, credit-card-sized computer in the house was a $500 Philips HDTV.

Somewhere in that picture, there is some irony. (Or maybe just a First World Problem.)

Technocentric Thinking

Posted in Social, Technology on June 28th, 2008 at 17:34:52

Chad writes:

I know their “motto” is “Don’t Be Evil” .. but I think it should be “Don’t Be Smart” instead.. this is some dumb thinking from Google. Trust me.. I know better than Google on how I want to download and install my software.

This is just the latest in a long line of similar statements I’ve seen from people across the web, in a variety of situations, along the lines of “I know how to manage my machine” — with the underlying meaning being something like “You should act as if the people using your software know how to work their computers.”

When I put it that way, does it really sound right? Is there anyone who thinks that the *majority* of users of Google Earth actually know how to run their machines? Is there anyone who thinks that it makes sense for Google to build and QA two different install mechanisms — one for technical users who know what they’re doing, and one for those who don’t?

Very few companies the size of Google do anything on a whim. I expect that some thought went into the development of the Google Earth downloader. The fact that the thinking is not centered around technically competent users is just evidence that Google doesn’t need to target the early adopters; it’s not a sign that what they are doing is ‘Bad’ or ‘Stupid’.

Technical users are few and far between in the mass market, and Google Earth is targeted towards the mass market. As with any software aimed at a mass market, there is nothing ’stupid’ about removing tools that the majority of users don’t need or care about: by doing so, you limit the number of people who end up confused by your tool, and that’s not a bad thing when you care about the majority of people instead of a technical elite.

Mapserver Rendering Bug

Posted in Locality and Space, Mapserver, Software, WMS on May 6th, 2006 at 14:58:18

One of the problems I’m running into with mapserver right now relates to its rendering of LINE elements wider than one pixel as they run into tile boundaries at acute angles. It seems that mapserver draws the centerlines for these elements up to the side of the image — but when a line approaches a boundary at an acute angle, this means the ‘outer’ edge of the rendered line stops short of the image edge.

With non-anti-aliased lines, this is less visible as a problem (especially if you’re not looking at the images as tiles) because the lines just stop — and visually, it’s hard to tell whether that happens at the image edge. However, it becomes very obvious when anti-aliasing is on, because the edges of the left and right boundaries are ‘tied’ together by a curving, anti-aliased line, resulting in a bubbled look at tile boundaries.

I’m not sure if this is a known bug, or something that other people have run into: I’m mostly recording it here so that I have a description of it.

Right now, it seems like the workarounds are:
* Use thinner lines (so it’s less visible)
* Don’t have roads near boundaries at acute angles — although there’s not much I can do about this one!

WMS Is Great…

Posted in Locality and Space, WMS on April 26th, 2006 at 06:14:27

WMS is great… when it works. But having all my maps go busted because a server outside of my control decides that it doesn’t want to work today (today, the MassGIS WMS server; yesterday, a TIGER/Line source I was using) is really annoying.

Oh well. At least one of my two basemaps is up.

OpenGuides Map Changes

Posted in Javascript, OpenGuides, WebKit on January 9th, 2006 at 05:37:18

So, tonight I took it upon myself to redo the JavaScript behind the map for the Open Guide to Boston. The reason is relatively simple: when I first wrote the Google Maps interface, the guide had about 50 nodes. With that in mind, drawing them all was acceptable. As the guide got bigger, it got a bit more time-consuming, but was still much easier than paging. However, the guide has recently doubled in size as I’ve increased the rate at which I pull data from Zami.com (specifically in order to build a free database of all the churches in the area with user interaction) — up to 2400 nodes, making it the single largest Open Guide to date by pure node size, as far as I can tell.

With this change, the map was no longer just difficult to use: it was impossible. Attempting to open the page crashed my web browser.

So, I took it upon myself to learn enough JavaScript to do paging… but then decided that rather than paging, I should attempt a somewhat friendlier solution to start. So I started hacking, and got into the Google Maps getBoundsLatLng() function, which allows me to determine the area the map is displaying.

There are two basic options from here, depending on the performance you expect in various situations and the scalability you want:

* Load all the data up front, storing it in JavaScript variables. When the map moves, iterate over the data, adding markers only for points that aren’t already shown and fall within the new span.
* Load only the initially visible data from the database, fetching everything else via XMLHttpRequest.

The first is obviously something that can be done entirely client side, and in fact turned out to be the (much easier than I expected) path I chose. The largest reason is simply ease of integration. The OpenGuides code is rather convoluted (to me), and modifying it is not something I enjoy spending a lot of time doing. I’ll do it when I need to, but I prefer to avoid it. The other option would require querying the database directly via Perl or some other language, without using the OG framework. That would probably be slightly faster, but with the amount of data I’m using, iterating over it takes minimal time compared to drawing the GMaps markers. As a result, I chose the slower, less scalable, but quicker-to-implement way of doing things. The biggest benefit is that I can merge it back into the other quickly growing guides: Saint Paul is now growing by leaps and bounds alongside Boston, thanks to pulls from Zami.com’s database, and I plan to do similar things with other guides. Also, London probably will not be able to use the mapping stuff in any useful way without this kind of code, so I went for the quickest thing that could possibly work.
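
The core of that first option is small enough to sketch here. This is an illustration rather than the guide’s exact code; the method names follow my memory of the Maps API of the time, and contains() stands in for whatever bounds test the API actually provides:

    // All nodes are loaded up front (assume the OpenGuides templates
    // emit them into allNodes, each with a name and a point). On each
    // map move: add markers only for points newly in view, and drop
    // the ones that have left it.
    var visibleMarkers = {};   // node name -> marker currently on the map

    function refreshMarkers() {
        var bounds = map.getBoundsLatLng();   // the current viewport
        for (var i = 0; i < allNodes.length; i++) {
            var node = allNodes[i];
            var inView = bounds.contains(node.point);
            if (inView && !visibleMarkers[node.name]) {
                var marker = new GMarker(node.point);
                map.addOverlay(marker);
                visibleMarkers[node.name] = marker;
            } else if (!inView && visibleMarkers[node.name]) {
                // Removal is the expensive part -- see the gotchas below.
                map.removeOverlay(visibleMarkers[node.name]);
                delete visibleMarkers[node.name];
            }
        }
    }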

There were a few gotchas I had to avoid along the way:

* Removing Google Maps markers takes *much* longer than adding them — like, 3-4 times longer in extremely informal testing. With one or two, or even 20-30, this doesn’t matter; with 100, it starts to be a significant barrier to user interaction.
* There are a number of words which are reserved in JavaScript, but are not treated that way by Firefox. This leads to confusing error messages in Safari (although they are slightly improved in the latest WebKit): if you see a ParseError, check the Firefox Reserved Words list, which is in one of their bugs. For example, the variable “long” is reserved, and cannot be used to pass something like, say, longitude. (A short example follows this list.)
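
To make that concrete:

    // Firefox happily parses this, but WebKit treats "long" as a
    // reserved word (it's on ECMAScript's future reserved word list)
    // and throws a rather unhelpful ParseError.
    var long = -71.06;        // breaks in Safari/WebKit
    var longitude = -71.06;   // fine everywhere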

Discovering the ParseError, working out why it was there, and how to fix it, was greatly helped by the #webkit folks: I have never seen a more dedicated, hardworking, friendly, open and inviting group of Open Source Software developers in my history as a programmer, either in closed or open source products. Many thanks to them for helping me work through why the error was happening, and how to avoid it in the future.

Lastly (although this is not quite chronological, as I did this first), I had to modify the guide’s code to zoom in farther to start, so it wouldn’t try to load so many nodes. I think a next step is to allow users to set a default lat/long for themselves, à la Google Local, from which they can start browsing, rather than always dropping them into the main map. But that’s a problem for another late-night hack session.

Ning!

Posted in Ning, PHP, Technology, Web Publishing on October 4th, 2005 at 05:03:05

For the past 4 weeks or so, I’ve been working on a project previously known as 24 Hour Laundry.

Now, it’s no longer 24HL: Welcome Ning.

A development playground with all kinds of neat and nifty toys, Ning is attempting to do for application and code sharing what other apps have done for photos, bookmarks, and other arenas: allowing people to clone, mix, and create new apps.

There are a lot of cool things here, and I’ve got a pretty bad headache, so I’m not going to be able to cover everything I’d like to, but here are some of the cooler things about the site:

* System wide content store. Public content which is created can be accessed by any application. This content store is well abstracted, and has a content creation and query system. You don’t have to worry about scaling up: You can leave that to the professionals in the backend. At the same time, you can collect data from all the other apps in the playground. You want to create a book reviews site? First, grab everything that’s known as a Book from the site, and then use the built in classes for ratings and comments to build a discussion board. The possibilities for content mix and match are really spectacular. However, if you don’t want others touching your data, you can mark it as “private” and use it only in your app - but why would you want to?
* Built in classes for lots of things. Build a calendar. Interact with Flickr. Make a GMap. Talk to Amazon. The code’s all done for you, you just use it. Bookshelf makes extensive use of the Amazon classes, Restaurant Reviews With Maps uses Google Maps to show where you’re going — Bay Area Hiking Trails shows you how to get there.
* RSS feeds of content. The Ning Pivot is a really cool way of watching the content flow by, and you don’t have to watch it on the site: you can subscribe to the stream as RSS.

There are about a half dozen other really nifty things here that I can’t even think of at the moment, because it’s 5am and I’ve been walking around like a zombie for two weeks to get this stuff complete.

But the coolest thing is:
* All data added is placed under CC By-SA license. (If you don’t like this, ning isn’t for you.)
* All app code is completely open, and you can make it your own in 2 seconds.

Screw Ruby on Rails: who needs a 2 minute app, when you can write a 2 second app? All depends on how fast you can click.

If you run into problems with Ning, feel free to drop them here, and I’ll pass them on as best I can. You can never fix all the bugs before release, but I think the team working on Ning has done an absolutely incredible job with all the work they’ve put together.

There’s a lot of other stuff I want to write — one thing that others here might find interesting is how similar Ning’s content store is to RDF, and why I think there’s no functional difference. Of course, Marc and I got into a nice “discussion” on that one on IRC the other night, so maybe I’ll wait until I’m a bit less exhausted and can adequately express my points on the topic. :)