Wednesday, March 19, 2008

Flash Games Directory with 50-50 Ad Revenue Sharing - FlashGameALot.com

I am pleased to announce my new Flash Games Directory, FlashGameALot.com. Like GameGum, FlashGameALot offers a 50% advertising revenue share with game developers. It is pretty straightforward: when submitting a game, developers include their Google Adsense publisher code. Then, half of the time someone visits that game, the site shows a Google ad with its author's ad code. I invite everyone to come play the games, and developers to submit their games.
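The post doesn't describe how the 50/50 split is implemented, but a minimal sketch of the rotation logic might look like this (the site publisher ID and the function name are my own placeholders, not FlashGameALot's actual code):

```javascript
// Hedged sketch: one way a 50/50 AdSense revenue share could work.
// SITE_PUB_ID is a hypothetical placeholder, not a real publisher ID.
const SITE_PUB_ID = "pub-0000000000000000";

// Pick whose publisher ID gets this page view: half the time the
// developer's, half the time the site's. `rand` is injectable for testing.
function pickPublisherId(developerPubId, rand = Math.random) {
  return rand() < 0.5 ? developerPubId : SITE_PUB_ID;
}
```

Over many page views, each party's ID is served on about half of the impressions, which is what makes the split roughly even.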

Tuesday, March 11, 2008

Imagining the Future of Computers - 8 Videos

This is a small collection of videos which illustrate what the future of computing might be like.

Intel Research: 80 core chip prototype



DNA as a programming language for the physical world.


Sensual interface:


Nokia's Morph Concept (Nanotech enhanced portable computer):


Super Ego Contact Lenses:


The PlayStation 9:


360 Degree Floating Holograms:


Jeff Han's Multi-touch Interfaces:

Thursday, February 28, 2008

Backlinks Tag Cloud Widget

I am pleased to announce the new Backlinks Tag Cloud Widget, a mashup of MakeCloud and Yahoo Pipes. You can see it in action in the sidebar of this blog, where it says 'Who Links To This Page?'. It works by using Yahoo Pipes to run a search that gets a list of backlinks, then formats it as RSS and passes it to MakeCloud. It is easy to install in Blogger/Blogspot, or any other website (just copy and paste the code in; that's it). Check it out:

Backlinks Tag Cloud Widget

Thursday, February 21, 2008

TinyML - a statically type-checked functional programming language in 700 lines of SML

I finished documenting TinyML, my compact implementation of a functional programming language. What makes TinyML special is that its source code is exceptionally short, given that it implements a lexer, parser, evaluator and polymorphic Hindley-Milner type checker, and supports arbitrary recursive algebraic data types, like lists, tuples, trees and options. Best of all, TinyML is open source.

Visit the TinyML page

Friday, September 14, 2007

How to add a cloud to your blog (screencast)

Here's a screencast where I demonstrate how to add a cloud to a blogger blog, using MakeCloud:



There is also a higher resolution version of How to add a tag cloud to your blog available on MakeCloud.com.

MakeCloud - add a cloud to a blog easily

I am pleased to announce my latest invention, MakeCloud. It takes any RSS feed, and turns it into a cloud, which can be embedded anywhere, such as in a blog. Other people have called it a tag cloud, but I wouldn't say it is that exactly. Tags are usually written by people, and tend to be broad categorical labels. In contrast, the cloud that MakeCloud creates has one word for each story in the RSS feed that it is given. It uses an original algorithm to determine a single word that best represents each story. MakeCloud requires no signup, so it is really fast to use.
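The actual MakeCloud algorithm is original and not published, but one plausible approach to "one representative word per story" is to pick, for each title, the word that appears in the fewest other titles in the feed (a crude document-frequency heuristic, assumed here purely for illustration):

```javascript
// Hedged sketch (NOT MakeCloud's real algorithm): for each story title,
// pick the word that occurs in the fewest other titles in the feed.
function tokenize(title) {
  return title.toLowerCase().match(/[a-z']+/g) || [];
}

function representativeWords(titles) {
  // Count how many titles contain each word (document frequency).
  const docFreq = new Map();
  for (const t of titles) {
    for (const w of new Set(tokenize(t))) {
      docFreq.set(w, (docFreq.get(w) || 0) + 1);
    }
  }
  // For each title, keep its rarest word across the whole feed.
  return titles.map(t => {
    const words = tokenize(t);
    let best = words[0];
    for (const w of words) {
      if (docFreq.get(w) < docFreq.get(best)) best = w;
    }
    return best;
  });
}
```

Common words like "how" or "the" appear in many titles and so are never selected, which matches the behavior described: one distinctive word per story rather than broad tags.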

I added a cloud for this blog. You can see it in the top right. Neato...

Wednesday, August 29, 2007

How to make an RSS feed in about a minute (Screencast)

In this screencast, I show how to make an RSS feed simply and quickly.

http://www.youtube.com/watch?v=liIdmMsJSYo

Tuesday, August 28, 2007

The Top 100 Words on Digg

As I mentioned in an earlier post (Stories with Longer Titles get More Diggs), I used the Digg API to download titles and # of Diggs for over 230,000 stories. In this post, I continue my analysis of this data with a list of the top 100 words on Digg. To make this list, I first made a list of every word that occurred at least 25 times in all 233,570 story headlines. Then I sorted the list by the average number of Diggs received by a story with a title containing a particular word. The result is a list of words that appear in the titles of stories that got large numbers of Diggs. Without further ado, here is the list:

comcast programmer craziest undercover bittorrent positions riaa accidentally impossible graph desk strangest excuse slip worried youll creatures directors abandoned buildings boyfriend ton wiretap proposed hated lol defcon oops loser censors dominate dx unix dealers lolcat phishing rubiks served russias offered norton sec bullshit glass whether lowers sit grave editing wiretapping schwarzenegger cave grocery mb liquid loud dumb ninjas branch teenage aluminum owns discovers cheney dry commit marijuana slavery fool giulianis admin impeachment tried disturbing cookies mysteries gonzales reagan cpu loses built gorgeous levitation impeach diggcom futurama couch shops objects permission throws netflix cheneys damaged patrick quebec catching photograph alberto removing
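The analysis described above (count word occurrences across titles, filter by a minimum count, then rank words by the average diggs of the stories containing them) can be sketched like this; the function name and data shape are my own:

```javascript
// Sketch of the ranking: for every word that appears in at least
// `minCount` titles, compute the average diggs of stories whose titles
// contain it, then sort words by that average, highest first.
function topWordsByAvgDiggs(stories, minCount) {
  const stats = new Map(); // word -> { count, total }
  for (const { title, diggs } of stories) {
    // Use a Set so a word repeated in one title counts once per story.
    const words = new Set(title.toLowerCase().match(/[a-z']+/g) || []);
    for (const w of words) {
      const s = stats.get(w) || { count: 0, total: 0 };
      s.count += 1;
      s.total += diggs;
      stats.set(w, s);
    }
  }
  return [...stats]
    .filter(([, s]) => s.count >= minCount)
    .map(([w, s]) => [w, s.total / s.count])
    .sort((a, b) => b[1] - a[1]);
}
```

With `minCount` set to 25 and 233,570 headlines as input, the first 100 entries of the result would be a list like the one above.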

Saturday, August 25, 2007

Stories with longer titles get more diggs

Using the Digg API, I collected 233,570 story headlines from Digg, and the number of diggs for each story. One of the many interesting things that can be done with this data is to see how the length of a story's title is correlated with the number of diggs it is likely to receive. The graph below shows a histogram of the average number of diggs a story received vs. the number of characters in its title, for all 233,570 stories, except those with more than 1000 diggs or with a title of length 0 (I excluded these cases because the stories with very high digg counts skewed the averages considerably, and 0-length titles seem bogus).



From this graph, it is apparent that up to about 70 characters in length, stories with longer titles tend to get more diggs. Bear in mind that these numbers are averages, not medians, so stories with large numbers of diggs make the average seem unusually high. Regardless, the trend in the graph is very clear.
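The binning behind the histogram, including the two exclusions mentioned, can be sketched as follows (the function and data shape are my own):

```javascript
// Sketch of the histogram computation: bin stories by title length in
// characters and average the digg counts per bin, excluding the two
// outlier cases noted above (over 1000 diggs, empty titles).
function avgDiggsByTitleLength(stories) {
  const bins = new Map(); // length -> { count, total }
  for (const { title, diggs } of stories) {
    const len = title.length;
    if (len === 0 || diggs > 1000) continue; // excluded cases
    const b = bins.get(len) || { count: 0, total: 0 };
    b.count += 1;
    b.total += diggs;
    bins.set(len, b);
  }
  const averages = new Map(); // length -> average diggs
  for (const [len, b] of bins) averages.set(len, b.total / b.count);
  return averages;
}
```

Plotting the returned averages against title length would reproduce the shape of the graph above: rising up to roughly 70 characters.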

Wednesday, August 22, 2007

How I automated the process of scraping over 3,000 links from my competitor's sites

When I first created Video Lecture Database, I painstakingly added around 450 links to its database by hand, which took about a day and was extremely tedious. In some cases, I was able to slightly automate the process by taking a page with some links and transforming its HTML into an SQL query using some fancy text editor tricks, but I had to do it differently for every site, and it was still pretty slow. There was definitely some value in collecting those links though, as many people have visited the site. But I got sick of entering links, and visitors aren't entering anything but links to porn, so my collection of links to streaming video lectures ceased to grow... until now.

Armed with BatchMarklet, I decided to see if I could harvest all of the links from top competing websites, and a bunch more from other sources as well. The results are extremely promising- I gathered a collection of 3,115 links to streaming video lectures in a couple of hours. This post discusses the process of gathering those links.

To begin, I harvested all of the links on Video Lecture Database. To make this easier, I tweaked a few lines of code so that it would output every link on one page, then ran BatchMarklet to get them all. This took about 10 minutes and yielded a copy of the roughly 450 links that I already had.

Next, I went to the Free Science Online Blog and used BatchMarklet to collect all of the newer posts made there since I added most of them to Video Lecture Database. There were a few places where the links were not titled with the subject of the lecture, so BatchMarklet wouldn't work well for these. Instead, I opened each of these in a new tab, then saved them with FeedMarklet. I didn't count how many links I got here, but it took about 15 minutes to scrape all of the content that I wanted.

My next stop was the Spring 2007 MSRI lecture page. It took about 3 minutes to get all of the lectures from this page. I launched BatchMarklet, then checked all, then unchecked the non-lecture links, of which there weren't very many.

After that, I went to VideoLectures. A search for the empty string gave me a page with (no joke) everything in their database, which included every lecture, speaker, and a bunch of other stuff. I only wanted the lectures, and there were several thousand links on this page, so I didn't want to go through and check or uncheck all of the lecture or non-lecture links. Instead, I selected the section of the results that contained the lectures, pasted this into TextEdit (Mac), saved it as a WebArchive, then opened it and ran BatchMarklet on it. This gave me a list with about 1600 links to video lectures and very few non-lecture links. It took about 10 minutes to scan through the whole 1600-link list and uncheck a few duplicates and non-lectures.

My next stop was lecturefox. I couldn't get all of their lectures on one page, but I could get all of them on 9 separate pages. There were very few non-lecture links on these pages, so I BatchMarkleted them, checked all, then removed the few non-lecture links. It took about 5 minutes to extract all 296 links on this site.

I got 6 rather interesting links on physics from WBNL Streaming Video lectures, using a single batch action. There were a lot of links that I didn't want on this page, so I just checked the 6 boxes for the good ones.

From 101 science, I gathered 21 links to the multi-part series 'The Elegant Universe'. Getting the links took one batch action and about 1 minute.

Finally, I went to UC Berkeley's Webcasts, which had links to all of their courses broken up across 12 pages, one for each semester. About half of the links on each page were not for lectures. It took about 5 minutes to collect 244 links to course pages containing several lectures each.

In total, I gathered 2575 links into a feed for streaming video lectures, 296 links from lecturefox, and 244 links from UC Berkeley, for a total of 3,115 links.

Conclusion

Links are the primary asset of link directories such as the ones mentioned above. Link directories can be profitable for their owners, but they need to have lots of good links to attract visitors. Gathering links from around the web and entering them into a database one at a time is slow and tedious. A much faster way to build a directory of links is to use BatchMarklet to streamline and automate the process of scraping those links from other sites.

BatchMarklet - add every link on a page to an RSS feed with one click

I wrote a new feature for FeedMarklet. I was going through and bookmarking several links on the same page by hand, which was tedious. To automate this task, I made BatchMarklet, which gathers a list of every link on the current page, then gives a checkbox for whether each should be added to an RSS feed. This new feature makes it extremely fast to bring existing collections of links into FeedMarklet.
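The core of such a tool is just enumerating the page's links. A sketch of that step (not BatchMarklet's actual code; the extraction is written as a pure function so the browser wiring is only a comment):

```javascript
// Hedged sketch of BatchMarklet's core idea: collect every link on the
// current page so each can be shown with a checkbox. `anchors` is an
// array of { href, text } objects, e.g. built from document.links.
function collectLinks(anchors) {
  return anchors
    .filter(a => a.href && a.text.trim() !== "") // skip empty/untitled links
    .map(a => ({ url: a.href, title: a.text.trim() }));
}

// In the bookmarklet itself (assumed wiring, not the actual code):
//   const links = collectLinks([...document.links].map(a =>
//     ({ href: a.href, text: a.textContent })));
//   ...then render checkboxes and submit the checked subset to the feed.
```

Filtering out untitled links up front is why pages whose links are not titled with the lecture subject (as noted in the scraping write-up above) work poorly with this approach.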

Monday, August 20, 2007

New front page for laserpirate.com

There is now a new version of the homepage for laserpirate.com. In this version, I have shifted the focus away from Lumox and tried to provide a more comprehensive list of all of the software on the site. There's a lot of cool stuff there, so check it out.

Saturday, August 18, 2007

FeedMarklet: Postmortem for a 'web 2.0' product built in 12 hours

I just finished a new site, FeedMarklet. From idea to completion, it took about 12 hours today. It is a site which allows anyone to make an RSS feed with instantaneous setup, and then provides a link which, when dragged into the browser's bookmark bar, becomes a button that saves whatever page you are currently viewing to the RSS feed. The purpose of the site is to solve a problem which often bothers me: people find interesting links every day, but it is annoying to interrupt a browsing session to save a link to most social bookmarking services. My goal with this site was to completely streamline the process of creating an RSS feed and adding content to it, so that it can be done using as few clicks as possible. In this post, I will talk about the process of developing FeedMarklet.

A bookmarklet is like a regular link, but instead of taking the browser to a new URL, it contains some embedded JavaScript code. Bookmarklets can be dragged from a page into the browser button bar. Then when they are clicked, they execute some code, which (in theory) does something useful or cool, like redirecting a Wikipedia page based on whatever text is selected, or altering the current page to show table structure. Using a bookmarklet, it is possible to do several things which proved to be essential in building FeedMarklet. Before I did anything else, I wrote a few simple bookmarklets that would be essential parts of FeedMarklet. Here are some of them:
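The original snippets are not preserved here, but an illustrative bookmarklet of the kind described would gather the page URL, title, and any selected text, then open a submit window. The endpoint below is hypothetical; the URL-building is factored out as a function so the bookmarklet wiring is just a comment:

```javascript
// Illustrative example (NOT one of the original snippets): build the
// submit URL from the page URL, title, and selected text.
// http://example.com/submit is a hypothetical endpoint.
function buildSubmitUrl(pageUrl, title, selection) {
  return "http://example.com/submit" +
    "?url=" + encodeURIComponent(pageUrl) +
    "&title=" + encodeURIComponent(title) +
    "&desc=" + encodeURIComponent(selection);
}

// As a bookmarklet (assumed wiring, inlined when dragged to the toolbar):
//   javascript:(function(){window.open(buildSubmitUrl(location.href,
//     document.title, String(window.getSelection())), "_blank");})();
```

Everything goes through `encodeURIComponent` so characters like `&` and spaces in titles survive the round trip as query parameters.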



These little snips of code provided the starting point. The next step was to build a MySQL table for the RSS news items, and basic (no graphics) php code to get a list of news items and submit a new item. I had to search around a little to find out how to format a MySQL timestamp as a proper publication date for RSS, but the answer was to be found in the comments on php.net's date function reference.
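The post describes doing this date formatting in PHP; for illustration, here is an equivalent sketch in JavaScript. RSS 2.0 expects an RFC 822-style date for `pubDate`, e.g. "Tue, 28 Aug 2007 14:30:00 GMT":

```javascript
// Sketch (the original used PHP's date function): convert a MySQL
// DATETIME string like "2007-08-28 14:30:00" into an RSS pubDate.
// Assumes the database timestamp is in UTC.
function toRssDate(mysqlTimestamp) {
  const d = new Date(mysqlTimestamp.replace(" ", "T") + "Z"); // ISO form
  return d.toUTCString(); // RFC 822-compatible, e.g. "Tue, 28 Aug 2007 14:30:00 GMT"
}
```

If the database stores local time instead of UTC, the offset has to be applied before formatting, which is presumably the detail the php.net comments addressed.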

There was a little bit of a trick to getting the output data to register as valid RSS in Safari. I initially forgot to set the content type to application/xml using the php header function. Without it, I got nasty errors for my feed in Safari, but not in Firefox. Also, as I did not provide any styling information, it was necessary to use feed:// instead of http:// to refer to the feed throughout the site, so that when those links were clicked, Firefox would treat it like a feed and not just show raw XML data. It was also necessary to escape the title and description of the RSS items using CDATA, or news items with certain characters in them would break the feed.
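The CDATA escaping mentioned above can be sketched like this. One subtlety: a literal `]]>` inside the text would terminate the CDATA section early, so it has to be split across two sections:

```javascript
// Wrap titles and descriptions in CDATA so characters like & and <
// don't break the feed XML. A literal "]]>" inside the text would end
// the CDATA section early, so it is split across two sections.
function cdata(text) {
  return "<![CDATA[" + text.replace(/\]\]>/g, "]]]]><![CDATA[>") + "]]>";
}
```

With this in place, item titles containing ampersands, angle brackets, or quotes pass through the feed untouched.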

I had some bookmarklets and a rough RSS system. Then I set about with Photoshop, and threw together a layout and some graphics for the site. I wanted to use the graphics to help lead the user to dragging the feedmarklet into the bookmark bar, hence the prominent black upward-pointing arrow. I sliced the graphics up, compressed them, and put together an old-school tables layout (that's my style). It worked great in Firefox and Safari, but IE added extra vertical space that messed it up. After some seemingly arbitrary fiddling with the whitespace in the table code, IE formatted it right.

Quotes in titles and descriptions also caused problems, either leading to malformed PHP or malformed JavaScript. The solution here was to convert html entities to their escaped forms.

There were many other browser quirks that required some special attention. The front page of FeedMarklet.com detects IE vs Firefox and Safari, and generates different bookmarklet code to make it work for each browser. Initially, I wrote the code so it worked in Safari and Firefox, and had to fix it for IE. The issues that I ran into were the different method of getting the currently selected text, and a difference in how calls to window.open are handled- IE isn't happy if there are spaces in the second argument, so I made it blank, and then the code worked in all browsers.

Another issue which affected Firefox and IE but not Safari was that after clicking the bookmarklet link, it set the contents of the current page to the description text for the new item (I have no idea why). I was able to fix this by adding code to the bookmarklet to redirect to the current location, after performing the rest of its actions. This has the unfortunate side-effect of clearing form data, but oh well.

Firefox has the issue that new windows created by JavaScript open behind other windows. I tried a few things to bring the window to the front, but they didn't work right.

Overall, I think that FeedMarklet is a pretty cool website, considering its very rapid development. Now it's a matter of getting the word out and seeing what people think about it. Enjoy --Forrest.

A 1-Click Submit Button for Reddit

Sites like Digg, Delicious and Reddit rely on users to submit links, but generally require an interruption in the normal flow of the reading experience to do so. For Digg and Delicious, this is somewhat alleviated by the existence of 1-click submit browser buttons. These are links which you can drag directly from a web page into your browser bookmark bar, and then, with one click, submit the current page that you are viewing to Delicious or Digg. I looked, but couldn't find any such tool for Reddit, so I made my own. To use it, just drag the link below directly into your bookmarks bar, and you will be able to submit to Reddit with one click:

submit to reddit
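The bookmarklet behind that link likely builds a Reddit submit URL from the current page's address and title, along these lines (a sketch, not necessarily the exact original code):

```javascript
// Sketch of a 1-click Reddit submit: send the current page's URL and
// title to Reddit's submit endpoint.
function redditSubmitUrl(pageUrl, title) {
  return "http://reddit.com/submit" +
    "?url=" + encodeURIComponent(pageUrl) +
    "&title=" + encodeURIComponent(title);
}

// Bookmarklet form (the drag-to-toolbar link would inline this):
//   javascript:location.href='http://reddit.com/submit?url='+
//     encodeURIComponent(location.href)+'&title='+
//     encodeURIComponent(document.title);
```

Pre-filling both fields from the current page is what makes it one click: the user lands on Reddit's submit form with everything already entered.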

Enjoy.

Friday, August 17, 2007

Introducing The WizzardWiki

A while back, I wrote a wiki engine, but I have been keeping it password protected and locked away from the public, because it is too powerful and easy to abuse. Tonight, I decided to set it free, at least experimentally, so here it is:

The WizzardWiki


What, you might ask, makes The WizzardWiki different from other wikis? A couple of things:


  • It doesn't use a limited markup like most wikis, just pure HTML + a shorthand notation for linking to a topic. This is a wiki for people who know how to write HTML.

  • It renders previews of your changes in realtime, updated with every keystroke

  • It is minimal, lightweight and fast


There are many wikis out there, but most aren't designed for people who can write HTML. I built this wiki with the intention that only I would be using it, so it is designed for my needs. It lacks many essential features like file uploads and version control, but it is still interesting and useful. In particular, I find that sometimes I just need a little HTML scratch pad, and the live preview makes it very convenient. It is also possible to write javascript in the pages, and use that javascript to do powerful things. Because of this, I consider The WizzardWiki to be a development platform, in a way- anyone can go there and immediately write a new web application for it, using nothing but the site itself. For example, if I needed a calculator, but didn't have one, I could type the word 'calc' into the right hand field, press enter, then write a few lines of html and javascript to define a calculator, and then immediately use it. In fact, I did just that. Here's a link to the calculator on the wizzard wiki: WizzardWiki- calc.
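The live-preview mechanism described above is simple to sketch. The shorthand link notation below is an assumption (`[[Topic]]` style); the post doesn't document the actual WizzardWiki syntax:

```javascript
// Hedged sketch of the wiki's render step. Pages are raw HTML, plus a
// shorthand for topic links; the [[Topic]] syntax here is an ASSUMED
// notation, not necessarily WizzardWiki's actual one.
function renderWikiSource(source) {
  return source.replace(/\[\[([^\]]+)\]\]/g, (_, topic) =>
    '<a href="/wiki/' + encodeURIComponent(topic) + '">' + topic + "</a>");
}

// Live preview wiring in the browser: re-render on every keystroke.
//   editor.addEventListener("keyup", () => {
//     preview.innerHTML = renderWikiSource(editor.value);
//   });
```

Because the preview pane is real HTML being re-rendered on every keystroke, any script tags in a page also execute, which is exactly what makes the calculator-in-a-page trick possible (and the wiki easy to abuse).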

Chances are if I leave this up for a little while it will be horribly spammed with porn, but I guess we'll see. Maybe people will put some cool stuff on it instead. Let me know what you think of the WizzardWiki.

Wednesday, August 15, 2007

How much money I made writing web games in Flash

In an earlier post, I announced the start of an ongoing experiment to see how much money can be made from advertising with web games written in Flash. There has recently been some talk about this subject, as the developer of the Flash game Desktop Tower Defense reportedly made profits that would amount to over $100,000 per year (if the game were making that much money for a whole year, which I don't think it has been). Encouraged by this result, I wrote several games and promoted and monetized them in various ways. In this post, I give an account of what I wrote, how I advertised and monetized it, and how much money I earned as a result.

The Games

First, let's look at the games that I wrote:

Cave Flyer 1.0 - this was the first game I wrote for this experiment. It is based on the simple concept of flying through an ever-shrinking cave in a ship with lots of vertical inertia. It took about 4 hours to write in ActionScript 3. The graphics are weak, and it doesn't have a high score list. Nonetheless, it is a fairly classic concept without many Flash competitors (I think because it would have been hard to write efficiently in ActionScript 2), and people seem to find the game play fairly enjoyable.

Magic Plant 1.0 - this started out as a simple animation, and then I tried to add a game mechanic to it, which turned out to be too easy to be interesting to most. On the other hand, it has a certain zen charm that is strangely appealing, and while some people hate it, some like it. It requires patience, at least. It took about 4 hours to make.

Space Miner 1.0 - after seeing people's reaction to the somewhat weak Magic Plant, I decided to try my luck with a game which had a fun mechanic and a reasonable amount of effort in the graphics. The gameplay is based on the classic crystal catastrophe, although it is somewhat less diverse. I think I enjoyed this game much more than most people who played it, as I understood how deep and insanely hard it gets by level 20, and was able to get that far. Most people seemed to get bored because the first levels were too easy. This one has an online high score list- my hope here is that it encourages people to play again and again to beat the high score. But sadly, I hold all of the top 10 scores. This game took about 3 days to write.

Xonf 0.9 - this game is pretty sweet in my opinion, though very hard. It has a pooled particle system that avoids triggering garbage collection, which can be an issue in smooth animation with ActionScript 3. The gameplay is taken from Mars Matrix, a hardcore overhead shooter that consumed many hours of my college years, as it was on an arcade machine right next to my room. So I had been planning on writing a game with Mars Matrix mechanics for a while... another idea that has been in the works for a long time, and which I was happy to work in here, is the procedural generation of bodies and flight paths for the enemies. Golan Levin's walking things is the inspiration here- he used Flash to generate random walking insects. It seemed to me that this was a natural way to construct critters for a game. Spore, the game by Will Wright, is heavily influenced by the same idea. This also has a high score list, and other people beat my score. Some people really like this game, and others don't, often complaining that it is too hard. I rewrote this game from scratch 3 times to optimize the particle system, so it wound up taking about 2 weeks, on and off.

How many elements can you name in 10 minutes? - Seeking inspiration for an easy game to write that would be popular, I looked at the most dugg games in Digg's Web Games Section. How many states can you remember in 10 minutes and similar games were very popular, and seemed pretty simple, so I wrote my own variation on this theme. It took about 45 minutes to write.

Spreading the Word and Monetizing Page Views

Those are the games. Now let's look at how I distributed and monetized them.

First, all of the games are hosted on my site, in a section for original Flash web games. This is just an index of all of the games, with thumbnails and links to a separate page for each game. On the page for each game, there are two Google Adsense skyscraper-format ad blocks. For some of the pages I used only text ads, and for others text or image ads (I think that image advertisements are more appropriate for games than, say, lectures on economics). I will go over the Adsense data in the next section.

To promote the links to the games on my site, I submitted each of them to Digg in the Playable Web Games section. I also bookmarked them on my delicious account.

In addition to hosting the games on my site, I uploaded them to Kongregate and GameGum, both of which are flash game portals that offer ad revenue sharing with developers. To direct some of the traffic to my games on these sites back to my own site, I put a link in the games themselves back to the index of webgames on laserpirate.com.

Results and Data: How Much Money Did I Make?

So, how much money did I make from these games? First let's look at the page impressions and other related data. Google's Adsense terms of use prohibit me from giving away specific information about # of impressions, click-through rates, etc., so I will be vague: for these games, I got somewhere between $1 and $10 of revenue from Adsense impressions. I might also note that GameGum's revenue sharing system (allegedly) works by showing your Adsense ads some fraction of the time on their site, although I'm not convinced that it works. If so, I think that those Adsense views are not included in this total.

I can also give data on page requests from my own server logs, but the reader should be aware that these do not directly correspond to Google impressions, because they include non-unique views, many of which came from me during development- and probably for some other reasons too (so Google: please don't smite me, I'm not trying to break any terms and conditions).

For the months of July and August, the number of requests to my server for each of the games was:

The elements - 261
Cave flyer - 89
Magic plant - 83
Xonf - 101
Space miner - 142

We can also look at viewing statistics from GameGum and Kongregate. The stats for GameGum are (in # of impressions):

The elements - 18,091
Cave flyer - 198
Magic plant - 83
Xonf - 201
Space miner - 121

I find the count for the elements game quite dubious. If you refresh on the GameGum site, it counts as another impression, so maybe someone repeatedly refreshed. I don't think I got Adsense impressions for that many views, which suggests that they were not unique.

Here's the data for Kongregate (they provide statistics in 'Game Plays' and 'Ad Impressions'; these listings are ad impressions):

The elements - 1169
Cave flyer - 1263
Magic plant - 545
Xonf - 1021
Space miner - 821

This totals to 4819 impressions, for which Kongregate lists a revenue of $0.51. This means that they are paying about $0.10 per thousand ad impressions.

It is also interesting to look at the number of times each game was dugg:

The elements - 11
Cave flyer - 3
Magic plant - 1
Xonf - 2
Space miner - 1

From these numbers, we can see that my recipe of "take a popular game on digg that is easy to write and clone it" seems to have worked pretty well for the elements game. Digg didn't show much love for the other games.

Now let's look at some overall totals:

Time spent writing games: a few hours a day over 3 weeks
Games written: 5

# of views of those games on my server (not to be confused with Google Ad impressions): 676
Amount of money earned with Google Adsense: between $1 and $10

# of views on GameGum: 18,694
Amount of money made with GameGum: I'm not really sure. I looked at the HTML they generate to be sure that it is serving my ads, but I can't see any evidence of them being served in the Adsense reports.

# of views on Kongregate: 4819
Amount of money made with Kongregate: $0.51

Conclusion

As this data shows, it is entirely possible for Flash web games to return very little profit, compared to the amount of time it takes to write them (except for the elements game, which shows that copying what's popular and being lazy is effective). I certainly would have made more money if I had spent that time writing software for my employer. But if I spent 45 minutes writing the next Desktop Tower Defense, that might be another thing. It is worth considering that none of these games are truly polished like Lumox 2, so I am not at all surprised that some people found them mediocre. It would be interesting to see how this goes with a game that has excellent graphics, sound and gameplay. It takes a lot of work to write such games. Maybe more work than it is economical to do. On the other hand, really high quality games are often sponsored for $100-$1000. I will probably still write a few more web games just for the fun of it, but I won't really expect to make any significant amount of money. In contrast, my shareware titles such as Lumox 2 and Ultragroovalicious have sold many copies, producing a very reasonable return on my time investment for writing them. Seth Godin said something like "it is better to try to sell a product that costs $100 to 10000 people than one that costs $1 to 1000000 people". We can view the web gaming market as something like trying to sell a product that costs $0 to infinity people.

Some good BBQ Chicken

I have been messing with BBQ chicken recipes and trying to get it as delicious as I can. It came out unusually well tonight, so here's the recipe:

- used 'chicken tenders' (breast strips), which aren't very thick
- marinated for about an hour in a small glass container with a sealed lid. shook it up a couple times.

marinade:
- about a 1/3 of a head of garlic, smashed and very finely chopped
- a big sprig of rosemary, very finely chopped.
- a couple shakes of garlic powder, pepper, and salt
- a big squirt of lemon juice from one of those squeezable bottles
- equal parts of these liquids (enough to cover all of the chicken in the marinating container): olive oil, water, beer
- a dash of vinegar

I cooked it on a small charcoal barbeque. The coals were very hot- all completely red (I used one of those metal chimneys to light them, where you put some crumpled paper under it). The first side cooked really fast, then I spooned most of the remaining marinade onto the uncooked side of the chicken (taking care to get all of the garlic chunks), and flipped it over. Some of the marinade spilled on the coals, cooling them down, so the other side took longer to cook. I poured the little bit of remaining liquid from the marinade on this side, then covered the bbq and let it sit for a few minutes.

The chicken came out very moist, even though it still had plenty of darkened goodness. My previous chicken bbqs came out pretty dry, because I like it well done with darkened crispy stuff. I think what made it moist this time was using a lot of olive oil, cooking very fast at high temperature and using thin, small strips.

The flavor was good, although I was expecting it to taste more like rosemary, because I put a lot in. I guess rosemary becomes more mild when cooked. Anyway, I hope you enjoy this chicken recipe as much as I did.

Here's a picture of the leftovers taken using the built-in camera on my Macbook:

Monday, August 06, 2007

Dear Digg: Fix your broken login system

How many times has this happened to you: you're reading Digg, and you find a story that you would like to digg or comment on, but you are not signed in. Digg informs you that you must log in to digg or comment on stories, so you do so and... it has now lost where you were. In order to digg or comment on the story that you were interested in, you must re-find it on Digg (assuming it is not on the front page). 90% of the time this happens, I simply don't digg or comment on the story because I don't feel like wasting my time finding the story again. So Digg, please fix your half-assed login / session system- more people will vote and comment as a result.

Sunday, July 08, 2007

How much can I make with Flash games and Adsense?

I'm doing a little experiment. I hacked together a Flash game in a couple of hours, and put it up on my site with some Adsense on it. It's called Cave Flyer Flash Game, and it is inspired by the game Cave Ribbon for Palm Pilot, which I played in class in high school. The purpose of the experiment is to determine how much money I will make in advertising revenue. We'll see...

Saturday, January 20, 2007

Video Lecture Database (beta)

I am pleased to release Video Lecture Database (beta). It contains over 450 links to video lectures, ranging from string theory to literature. Check it out... you'll learn something!

Video Lecture Database

I wrote it completely from scratch for the specific purpose of organizing a large collection of links to streaming video lectures.

- Forrest

Thursday, January 18, 2007

Lumox 2 1.1

We (Laser Pirate Squad) just released Version 1.1 of Lumox 2. It is now a Universal binary, and the loading time is much shorter. Check it out:

Lumox 2

For the nerds:
We re-implemented the sound library to stream ogg rather than preloading mp3s to bring the load time down.

Long live Lumox 2.

Thursday, November 30, 2006

Hebbalicious

I have been tinkering around with a little web app that I call Hebbalicious. It's inspired by delicious, cloud views and Hebbian learning (the idea that connections get stronger when they are used). I encourage people to play with it:


Hebbalicious



It is a fairly simple contraption: there is a list of links, sorted by how many times they have been clicked. The links are drawn with a size that is proportional to the number of times they have been clicked. After 100 links have been added, it clears all links to start fresh (but this may change in the future).
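The click-proportional sizing can be sketched in a few lines of JavaScript. The function name and the pixel range here are my own illustration, not the actual Hebbalicious source:

```javascript
// Sort links by click count and assign each a font size proportional
// to its clicks, as in a tag cloud. minPx/maxPx are illustrative values.
function sizeLinks(links, minPx = 10, maxPx = 48) {
  const sorted = [...links].sort((a, b) => b.clicks - a.clicks);
  const maxClicks = Math.max(...sorted.map(l => l.clicks), 1);
  return sorted.map(l => ({
    url: l.url,
    // size scales linearly with the number of clicks
    sizePx: minPx + (maxPx - minPx) * (l.clicks / maxClicks),
  }));
}

const sized = sizeLinks([
  { url: "a.com", clicks: 10 },
  { url: "b.com", clicks: 5 },
]);
console.log(sized[0].sizePx); // the most-clicked link gets the largest size
```

The Hebbian flavor comes from the feedback loop: every click on a link makes that link bigger and more prominent, which in turn attracts more clicks.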

Enjoy.

Sunday, November 12, 2006

A Better Hack To Get Browser History Urls

I know where you've been describes a technique to determine whether a user has visited certain URLs, using a CSS hack to bypass security restrictions. Information about where a user has been is potentially valuable. However, that technique is limited: it can only check against a hard-coded set of URLs (so we can tell whether a user has been to yahoo.com, but we cannot get a complete list of everywhere they have been in general). In this post, I describe a hack that overcomes this limitation and extracts a complete list of URLs from the browser history. As far as I know, this hack only works in Safari (it does not work in Firefox).

So, go ahead and try it (nothing evil will happen, and no data will be stored if you click on submit):
Show my browser history

The basic idea behind the hack is this:

From JavaScript, we will open a new window, and pass it a handle to the current window. Then, the new window loops through the following procedure:

while the parent window's history still has items in it:
make the parent window go back one step in history (with history.go(-1))
get the location url for the parent window
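The loop above can be sketched in JavaScript. The real hack runs in a popup window reading its opener; here a mock parent object with a plain history stack stands in for the real window, so the harvesting logic can be shown (and run) outside a browser:

```javascript
// Walk the parent window backwards through its history, recording the
// location after each step. In the real hack, `parentWindow` is the
// window.opener handle; here it is a mock with the same surface.
function harvestHistory(parentWindow) {
  const visited = [];
  while (parentWindow.history.length > 0) {
    parentWindow.history.go(-1);         // step the parent back one page
    visited.push(parentWindow.location); // record where it landed
  }
  return visited;
}

// A toy parent window with three pages of history.
const stack = ["http://a.com", "http://b.com", "http://c.com"];
const parent = {
  location: stack[stack.length - 1],
  history: {
    get length() { return stack.length - 1; },
    go() { stack.pop(); parent.location = stack[stack.length - 1]; },
  },
};

const visitedUrls = harvestHistory(parent);
console.log(visitedUrls); // ["http://b.com", "http://a.com"]
```

Note this simulation simplifies one detail: a real window.history does not shrink as you call go(-1), so the actual hack has to detect when stepping back stops changing the location.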

The source code can be obtained here:
hist.html source

Sunday, August 13, 2006

Failed experiments in DIY solar hot air ballooning

I decided to attempt to build a solar hot air balloon today. It didn't work. These are the details of what I did. Maybe someone will find this report useful in avoiding mistakes while attempting to construct their own solar hot air balloon.

I skated to Ralphs and got a box of 39 gal. thin black plastic trash bags and some generic "scotch" tape. On the way, I noted the temperature on a bank's marquee: 93 degrees. It was hot and sunny. The bags were 2 ft 8.5 in x 3 ft 8 in. I began by cutting the sealed end off of 3 bags. Then I taped those three bags together into a tube with a fourth, uncut bag at the end. This gave me a tube about 2 feet in diameter and at least 10 feet long. I tested this large plastic enclosure, only to find that it leaked horribly. I put a lot more tape on it. The additional tape sealed the bag fairly well, so it was pretty much airtight. I left it in the sun for a while. The air inside was noticeably hotter when I put my hand inside, but either this tube was not large enough or the air was not hot enough to make it float.

The great thing about hot air balloons is that their lifting power is proportional to their volume. If we imagine a spherical balloon, increasing its radius increases the volume in proportion to the cube of the increase in radius. However, the surface area (and therefore the amount of plastic, and the weight of the balloon) only increases in proportion to the square of the increase in radius. My balloons were more like cylinders, but the same idea still applies. If the balloon is big enough, it should be able to lift itself.
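To put rough numbers on that square-cube argument, here is the calculation for a spherical balloon whose radius is doubled (purely illustrative values):

```javascript
// Doubling a sphere's radius multiplies lift capacity (volume) by 8,
// but plastic weight (surface area) by only 4.
const volume = r => (4 / 3) * Math.PI * r ** 3;
const area = r => 4 * Math.PI * r ** 2;

const r = 1; // arbitrary units
console.log(volume(2 * r) / volume(r)); // 8: lift scales with the cube
console.log(area(2 * r) / area(r));     // 4: weight scales with the square
```

So each doubling of the balloon roughly doubles the lift available per unit of plastic, which is why going bigger was the natural next move.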

So I tried to make a bigger balloon. I made a second tube just like the first one. Then I slit each tube down the side, and taped the two pieces of plastic together into one bigger tube. This new balloon had roughly twice the circumference of the first one. Again, it leaked, and again I patched it. I tried melting the plastic together in a few places with a lighter, which seemed to work fairly well. With the patches, the new bigger balloon seemed to hold air a little better, but still not perfectly. It never got off the ground, except when strong gusts blew it.

The shadows of the trees grew longer. I gave up for the day around 5:30 pm. I will try again tomorrow, after adding a bunch more tape to make sure the bigger balloon is airtight. If it works, I'll post some pictures.


Related Material:
Solar Hot Air Balloon
Hot Air Balloons
Build Your Own Hot Air Balloon
Solar Airship - Hot Air Balloon Toy
