The King of 0,0
MakerLab Blog, http://blog.makerlab.com/2010/08/the-king-of-00/, Fri, 20 Aug 2010

Whatever goeth on the belly, and whatever goeth on all four, and all that have a great many feet, of every manner of crawling thing which crawleth on the earth these ye shall not eat; for they are an abomination.
*
I am the King of 0,0 and this is my ark, my ship, my home: the NaN.
Our food comes from Ghana or Nigeria. We watch the political climate like the weather. On bad months we dig into the merchandise. On good months we have rum, papaya and new friends and stories to share.
Some months it almost seems worth it.
Almost like we did it on purpose.
This is a good month.
Most of the maps of this region show absolutely nothing. Many show a little hole in the ocean beneath us – that isn’t really there – it’s just sand – I know I checked. Just an artifact of where we happen to be.
Something else the maps don’t show is that just off port-side is a frothing mass; a baitball of crackling and clicking, screaming and groaning plastic and metal and synthetic polymer. Everything fighting to be where their programming says they should be. It smells awful.
But it’s why we are here – us the cynics, the thieves, the caustic casually critical practicing our ennui – measuring ourselves against you. We don’t carry two of everything, more like thousands of everything. And by everything I mean nothing. I mean all the crap that you make – all the detritus that comes swimming, paddling, with rockets and propellers, with wings and fins and webbed toes vectoring in on us at 0,0.
And the things that end up at 0,0 are starting to horrify me. What is going on with you? What the hell is going on there? You make the most unimaginable things. And most of you couldn’t program your way out of a cargo container frankly. You all are starting to scare the crap out of me.
We didn’t really need to know that somebody out there is replicating their dead family over and over; we didn’t need to know about you. More of those things still bump against the edge of the boat every few days, mewing for attention. The little blue puppettas are adorable really, but when you draw a net up of several thousand of them and they all scream ‘daddy’ in choreographed unison – let’s just say it is unsettling.
And the dogs with lights. They are giving me nightmares. They’re running around all over the deck between everybody’s legs, slobbering and repeating everything anybody says. They open their mouths and 10,000 cilia pour out; smelling sensing sampling before being slurped back up like vomit.
Thank god for friends. Today my comrades, my fighters are showing up; my bad ass super posse of righteous righteous brothers and sisters. They’ll show up in leather, in boots, inked and scarred. There will be war stories and we will drink, there will be tears and laughter. They’ll be back from Afghanistan and Somalia, Kibera and Haiti, from the barrios, my brothers my sisters. There is so much to do – so many incredibly fucked up situations – so many foundations, non-profits and government NGOs to circumvent. Warlords to placate, communities to self-organize. Teach them how to speak for themselves; to police themselves, to see themselves – because if you don’t then somebody offers to do it for them – and then things really suck. Kids have their lives swiped from them before they can really even phrase the question. All the wannabe cops, all the thugs, all the egoists, the sociopaths, the power-hungry, the liberals – the exact people who shouldn’t be given power – come in to sate their own insecurities, their own need for power, control and attention. We’ll talk about money, how to destroy it, how to make it so it cannot be raped and ripped and stolen from the land, from the people, from their eco-systems.
We’ll meet, talk, share tactics – and then they’ll go back out and fight the good fight.
I will stay here however – since, as I mentioned – I am the King of 0,0. And frankly I’m afraid of you all. You scare the crap out of me – did I mention that?
Anyway I’m too old for these battles – and I’m not putting on a butoh suit goddamit. Those wars are for you kids. Just let me die before things really go to hell.
Today we pulled up several thousand phones; they were wriggling through the sea – totally confused – their brute force geo-location strategy completely failed. I tied one on and it seemed pretty good – lots of CPU – really fast – pretty hard to hack – we had to swizzle their UDIDs and then log in as ourselves – very new – I might even keep a few for a week or two. We did find a few choice morsels of real value as well; a very nice spatial reconstruction algorithm for registering an AR view against the real world. An improved micro advertising attack. We scavenged a very nice social network graph to try exploits against too. We’ll probably try to sell the hardware on eBay – drop them off the side later on today with some better programming – but we’re really after the code. Did I mention most of you can’t program shit? Really what is wrong with you all? It’s a basic core literacy – honestly. What are they teaching you these days?
You know what you all need is some personal responsibility. It is like some kind of liberal plague – all criminals are innocent, all guilt is explained away by genetics or situation or social fabric. Where’s the sense of right and wrong? Where’s your compass? You can’t just program your way out of your own personal anxiety. You all need to take responsibility. When you fuck up you have to make things right. Don’t placate yourself with a fresh anxiety, a novelty, or some skufla, some kitschy gift – actually make it right – dig down to the heart of the matter and fix it. It’s the one thing you can do that nobody else can do.
It’s really about knowing your triggers – you know we all have them. All this shit that shows up here – it’s like a raw stream of psychic sewage.
The houseboat you made – it’s here if you want it – complete with a glass bottom swimming pool so you can see the dead sea you created. The library of fake books with the sand floor, the indoor terrarium, the replica night sky. The giant squid with the penis tentacles (that thing is disgusting frankly). The world isn’t just an endless series of fetish objects; everything lensed and warped to service your animistic drives – it’s not just there to service you – it exists for its own reasons. Squid are supposed to be squid – not a homoerotic fantasy. Houses are supposed to be places to curl up briefly away from the world before re-engaging – they aren’t supposed to replace the world. Can’t you see that? What is wrong with you?
The prosthetics in general make me want to cry. Did I really need to know that you wanted to be taller with larger breasts and a narrower waist? Did I really need to know that you are hiding your faces behind neutral masks? I’ve locked them in one of the holds; they spend their time endlessly copulating – walking about hips swinging in perfect figure eights; some doing ecstatic dance and contact improv; beautiful, barren and completely devoid of any human imperfection, meaning and history – no stilted motion due to car accidents or arthritis, no stories of working through repressed memories or a struggle with mind/body duality – all of that human quality is lost – they are like looking at the stars at night – beautiful and completely empty. I might burn them all rather than let anybody have them. They frighten me. I don’t even want to know what you are becoming. And I’m tired of all the art looking at me.
The Maker Edict still holds apparently; at least these things are not self-replicating as far as I can tell.
The butoh prosthetics are even more troubling. Did I really need to know that so many of you aren’t even happy with yourselves let alone your planet? Wasn’t it good enough that you were alive, that you breathe, that the fragility and humanity reflected in your faces was the whole point? Look, butoh used to mean challenging assumptions. It is no longer grotesque or taboo; it wasn’t supposed to be the baseline – grotesque is not a baseline. You’re not supposed to constantly challenge your existence every day – totally loosed free of any anchor, morality or purpose. What is wrong with you all?
The Augmented Reality views that I keep pulling off the phones are the worst; scoring your reality by a baseline matrix of socially acceptable values – taking on the viewpoint of your boss or your peers – so that you don’t have to think for yourself – so that your choices, your decisions, your gestures animation and actions all reflect a body language of concordance. Ok, body dimorphism I can understand – you know we all play with gender and identity. But a flock of birds is not a bird. Our minds are the most radically divergent space between us – it is the brilliance of others that we cherish; the radical evaluation – the points of view that differ – that is almost the whole point. When you’re not somebody else – accept that – why do you want to take the same path as somebody else anyway? The whole point is to explore all possible options in parallel – that’s the point of diversity – the point is to keep exploring the question of why we even exist to ask questions. Don’t insult God.
Speaking of God – I am waiting for God.
He doesn’t exist… yet.
But I’m sure somebody will make him too… I’ve seen almost everything else.
He’ll come swimming up to the boat; the same victim of bad programming as everything else – some hapless creation of some mythologically misguided savant, a priest a rabbi a child. And he’ll be pissed, righteously indignant wielding freaking thunderbolts and laying waste to everything around him.
He’ll be pissed that you’re wasting all of this time squandering it on a personal angst about what you crave, need, miss, hunger for, want. That you are so simian. That your buttons are so naked and raw and that they are so very easy to push. I’m not saying go all buddhist on this shit – but try step back just a little bit from the wheel of life – you’ve got a grip so tight that it is spinning you around like a top.
Maybe God will pick a new plague, not fire, not water but something else, something amusing only to him. Maybe if he killed you all it might just be a good start.
Really that’s all I have to say.
Well. Also I will say there’s a big band of activists out there; people who are engaged, who are giving a shit, who are focused outwards not inwards.
I’m the king of 0,0 and I sit here in between those two places. I don’t even know anymore.
I just know that a lot of people are working hard to try make things better – to try save what is left – and you could be one of them. Put down your toys, volunteer, get out of the house, get out of your head. It’s the only thing moving as fast as the collapse of our ecosystems – our compassion, our power, our ability to see, build, participate and create.
I just hope you get your act together soon.
Until then we’ll keep fishing your vanities out of the sea and selling them back to you.

There’s nothing else moving down there anyway.

Augmentia Redux
http://blog.makerlab.com/2009/11/augmentia-redux/, Thu, 19 Nov 2009

Text of AR Presentation for Dorkbot San Francisco

Anselm Hook
http://github.com/anselm
http://twitter.com/anselm

Quick notes for tonite:

Tomorrow night @sidgabriel is going to do http://www.meetup.com/augmentedreality with the extended hyper-posse. Please join. And if you really haven’t had enough, @ARDevCamp is on December 5th at the @hackerdojo. If you want to go to @ardevcamp it is free but you MUST register here. If that isn’t enough for y’all then I really can’t help you – we tried really hard to crush your meme.

Also – more personally – if you hate this then you’ll probably also enjoy hating this earlier post on Augmentia and this post on the Aleph No-Op, and – if those fail – this one might push you over the edge: Biomimetic Signaling.

Here we go….:

Are you going in or are you trying to get out?

Augmented Reality Redux

Who else is playing with AR apps, or wants to? I’m assuming that most people are at least familiar with Augmented Reality.

Recently I started exploring this myself and I wanted to share my experiences so far.

As I’ve been working through this  I’ve kind of ended up with a “Collection of Curiosities”. I’ll try to shout them out as I go as well.

My hope is to encourage the rest of you to make AR apps also – and go further than I’ve gone so far.

What is Augmented Reality?

If I had to try to define it – I’d say an Augmented Reality app is an enhanced interaction with everyday life. It takes advantage of super powers like computerized memory and data-driven visualization to deliver new information to you just in time, ideally connected to and overlaid on top of what we already see, hear and feel.

Beyond this it can ideally watch our behavior and respond – not just be passive.

Observation #1: Of course there’s nothing really new here – our own memories act in a similar way – the fact that we can all read, decipher signs, symbols, signifiers and gestures means we are already operating at a tremendous advantage. As my friend Dav Yaginuma says, “a whole new language of gestural interaction may emerge”. Perhaps we’ll get used to seeing people gesticulating wildly on the streets at thin air – or even avoiding certain gestures ourselves because they will actually make things happen (as Accelerando hints at with its Smart Air idea). So what makes it interesting is the degree of transition – like a phase transition between ice and water, or between water and air, as Amber Case puts it.

What are some examples?

1) You could be bored – walking down the street and see that a nearby creek could use some help. So you spend an hour pulling garbage out of it. And somehow understand that that was doing some good.

2) You could be in an emergency situation outside, maybe it is starting to rain, and maybe you’ve lost track of your friends. An AR view could hint at which way to go without being too obtrusive.

3) You could walk into a bookstore, pick up a book, and have the jacket surrounded by comments from your friends about the book.

Observation #2: The placemarks and imagemarks in our reality are about to undergo the same politicization and ownership battles that already affect DNS and content. Creative Commons, the Electronic Frontier Foundation and other organizations try to protect our social commons. When an image becomes a kind of hyperlink there’s really a question of what it will resolve to: will your heads-up display of McDonald’s show tasty treats at low prices, or will it show alternative nearby places where you can get a local, organic, healthy meal quickly? Clearly there’s about to be a huge ownership battle for the emerging imageDNS. Paige and I saw this when we built the ImageWiki last year, and it must be an issue that people like SnapFish are already seeing.
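To make the imageDNS idea concrete, here is a minimal sketch of what such a resolution step could look like: reduce an image to a perceptual hash, then look the hash up in a registry where several parties may have claimed (nearly) the same image. The 8x8 average hash, the Hamming tolerance, and the example URLs are all illustrative assumptions, not how ImageWiki actually worked.

```python
# Sketch of an "imageDNS" lookup: a perceptual hash stands in for a domain
# name, and a registry resolves it to content. Illustrative only.

def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale grid (rows of 0-255 ints)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def resolve(image_hash, registry, max_distance=6):
    """Return every registered payload within a Hamming-distance tolerance.
    Several parties can claim nearly the same image: the ownership battle."""
    return [url for h, url in registry.items()
            if hamming(image_hash, h) <= max_distance]

# A storefront logo claimed by two competing parties:
logo = [[200] * 8] * 4 + [[30] * 8] * 4   # crude two-band "image"
h = average_hash(logo)
registry = {h: "megacorp.example/menu",
            h ^ 0b101: "localfood.example/nearby"}
print(resolve(h, registry))  # both claims resolve for the same image
```

Who controls `max_distance` and the registry ordering is exactly the fight the observation predicts.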

[Image: SIFT features]

My own work so far:

My own app is a work in progress and I’ve posted the source code on the net and you can use it if you want. Here’s what my first pass looked like about a week ago:

First of all I am motivated by looking for ways to help people see nature more clearly. I’m concerned about our planet and where we’re going with it. Also I’m a 3D video game developer, so the whole idea of augmenting my real reality – and not just a video game reality – seemed very appealing.

My code has two parts: 1) a server side and 2) a client side.

My client side app:

On the client side I let you walk around and hold the phone up and decorate the world with 3d objects.

1) You can walk around and touch the screen and drop an object
2) You can turn around and drop another object
3) If you turn back and look through the lens you can see all the objects you dropped.
4) If you go somewhere else you can see objects other people dropped.
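The drop-and-look-back loop above mostly comes down to converting each object’s stored latitude/longitude into a local offset from the camera, then checking whether its bearing falls inside the current field of view. A rough sketch using an equirectangular approximation (fine over the few hundred meters an AR view spans); the coordinates and field-of-view value are just example numbers, not from my app.

```python
import math

EARTH_M_PER_DEG_LAT = 111_320.0  # rough meters per degree of latitude

def local_offset(cam_lat, cam_lon, obj_lat, obj_lon):
    """(east, north) meters from camera to object; equirectangular approx."""
    north = (obj_lat - cam_lat) * EARTH_M_PER_DEG_LAT
    east = (obj_lon - cam_lon) * EARTH_M_PER_DEG_LAT * math.cos(math.radians(cam_lat))
    return east, north

def in_view(cam_heading_deg, east, north, fov_deg=60.0):
    """True if the object's bearing is inside the camera's horizontal FOV."""
    bearing = math.degrees(math.atan2(east, north)) % 360.0
    diff = (bearing - cam_heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2

# Drop an object 100 m north of the camera, then face north vs. south:
cam = (37.4419, -122.1430)                      # Palo Alto-ish
obj = (cam[0] + 100 / EARTH_M_PER_DEG_LAT, cam[1])
e, n = local_offset(*cam, *obj)
print(round(n), in_view(0.0, e, n), in_view(180.0, e, n))  # 100 True False
```

Turning away from an object (the "it was behind me" effect later in this post) is just the `in_view` test failing.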

Here is a view of it so far :

[Image: View of my AR app, a work in progress]


There were a lot of code hassles that I will go into below, but basically loading and managing the art assets was the largest part of the work. Here is my dinosaur test model and my warehouse scene:

[Image: Dinny, from someone who posted it to the Google Sketchup Warehouse]

[Image: the dinosaur world scene open in Blender]

Observation #3: At first I let the participants place objects like birds and raccoons, but I ended up switching to gigantic 3d dinosaurs (just like the recently released Junaio app for the iPhone). The longitude and latitude precision of the iPhone was so poor that I needed something really big to make the shared 3d space really “work”. I suggest you design around that limitation for your projects too.
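The back-of-envelope arithmetic behind that design decision: one degree of latitude is roughly 111 km, so a fix that wobbles by a few ten-thousandths of a degree moves your anchor by tens of meters. The error figures below are typical-case assumptions, not measured iPhone numbers.

```python
# Back-of-envelope: how far does a lat/lon wobble move an anchored object?
M_PER_DEG_LAT = 111_320.0  # meters per degree of latitude (approx)

def wobble_meters(lat_error_deg):
    """Positional error in meters for a given latitude error in degrees."""
    return lat_error_deg * M_PER_DEG_LAT

# Assumed fix errors, from very good to phone-typical:
for err in (1e-5, 1e-4, 3e-4):
    print(f"{err:g} deg -> {wobble_meters(err):.0f} m")

# An object needs to be larger than the wobble (tens of meters, i.e.
# dinosaur-sized) to read as "in the same place" across successive fixes.
```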

Oddly it was kind of coincidental – but it shows the hyper velocity of this space – that something I was noodling on was actually launched live by somebody else before I could even finish playing around. Hats off to Junaio:

[Image: Junaio's app]

My server side app:

On the server side – I also started to pull in information from public sources to enhance the client view. I used two data sources – Twitter and Waze.

[Image: Geolocating some Twitter data to help with an AR view]

Twitter has a lot of geo-locatable data and I started sucking that in and geo-locating it.

Observation #4: I found that I had to filter by a trust network because there is so much spam and noise there. It shows how the issue of trust is still unsolved in general, and how important social filters will be. Spam is annoying enough already – you sure don’t want it in a heads-up display. Here’s a tip for AR developers – NEVER SHOW STARBUCKS!
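A trust filter of the kind described can be sketched as two predicates: is the author inside my trust network, and is the post close enough to matter? The post structure, the author names, and the 500 m radius below are illustrative assumptions, not Twitter’s API.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def visible_posts(posts, trusted, here, radius_m=500.0):
    """Keep only nearby posts whose authors are inside the trust network."""
    lat, lon = here
    return [p for p in posts
            if p["author"] in trusted
            and haversine_m(lat, lon, p["lat"], p["lon"]) <= radius_m]

posts = [
    {"author": "friend", "lat": 37.7750, "lon": -122.4194, "text": "creek cleanup"},
    {"author": "spambot", "lat": 37.7750, "lon": -122.4194, "text": "CHEAP PILLS"},
    {"author": "friend", "lat": 40.7128, "lon": -74.0060, "text": "too far away"},
]
here = (37.7749, -122.4194)
print([p["text"] for p in visible_posts(posts, {"friend"}, here)])
# -> ['creek cleanup']
```

The spambot is dropped by the trust test even though it is standing right next to you, which is the whole point.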

Also I started collecting data from Waze (my employer). Waze does a real-time crowd-sourced car navigation solution for the iPhone – crowd-sourced traffic accident reports, for example. They don’t have a formal public API but they do publish anonymized information about where vehicles are. So I am now trying to paint contrails onto the map in real time to show a sense of life. I don’t have a screenshot of that one yet – but here’s the kind of data I want to show:

[Image: Waze data]


Observation #5: Even Twitter started to feel not quite real time. So what was interesting here was to show literally real-time histories and trajectories. It seems like this means animation, and drawing polygons and lines – not just floating markers. I imagine AR as a true enhancement – not just billboards floating in space. It started to feel more video-game-like, and I feel AR comes more from that heritage than from GIS.
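One way to sketch the contrail idea: turn a vehicle’s timestamped track into polyline segments whose opacity fades with age, so the newest motion reads brightest while old motion dissolves. The track data and the 60-second fade window below are made up for illustration.

```python
# Turn timestamped probe points into polyline segments whose opacity
# fades with age -- a "contrail" rather than a static marker.

def contrail_segments(track, now, max_age_s=60.0):
    """track: [(t, lat, lon), ...] sorted by time.
    Returns [(p0, p1, alpha)] with alpha in (0, 1]; newest segments brightest."""
    segments = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(track, track[1:]):
        age = now - t1                      # age of the segment's newer end
        if age < max_age_s:
            alpha = 1.0 - age / max_age_s
            segments.append(((la0, lo0), (la1, lo1), round(alpha, 2)))
    return segments

# An assumed 40-second drive north-west, rendered 50 s into the run:
track = [(0, 37.0, -122.0), (20, 37.001, -122.0), (40, 37.002, -122.001)]
for p0, p1, alpha in contrail_segments(track, now=50):
    print(p0, "->", p1, "alpha", alpha)
```

Because alpha is recomputed every frame from `now`, the trail animates with no extra state: segments simply age out of the list.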

Overall Work Impressions

When I was at the Banff Centre earlier this summer a pair of grad students had an AR golf game. The game did not know about the real world and so sometimes you’d have to climb embankments or even fences to get at the golf ball if you hit it into the wrong area. This area is VERY rugged – their game was sometimes VERY hard to play:

[Image: Banff Centre]

What was surprising to me is the degree of perversity and the kind of psycho-geography feel that the overlay creates – the tension is good and bad – you do things you would never do – the real world is like a kind of parkour experience.

In my project I had some similar issues. I had to modify my code so that I could move the center of the world to wherever I was. If I was working in Palo Alto, and then went to work in SF – all of my objects would still be in Palo Alto – so that was really inconvenient – and I would have to override their locations. I just found that weird.
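One way around that center-of-the-world problem is to keep objects in absolute latitude/longitude and derive scene coordinates from a movable origin, so relocating from Palo Alto to SF means re-basing the origin rather than overriding every object’s location. A sketch under that assumption; the class, the coordinates, and the conversion factor are all hypothetical.

```python
import math

M_PER_DEG = 111_320.0  # rough meters per degree of latitude

class World:
    """Objects live in absolute lat/lon; the scene origin follows the user."""
    def __init__(self):
        self.objects = []          # [(lat, lon, name)]
        self.origin = (0.0, 0.0)

    def rebase(self, lat, lon):
        """Move the scene origin to the user's current position."""
        self.origin = (lat, lon)

    def scene_position(self, obj):
        """(east, north) meters of an object relative to the current origin."""
        lat, lon, _ = obj
        o_lat, o_lon = self.origin
        north = (lat - o_lat) * M_PER_DEG
        east = (lon - o_lon) * M_PER_DEG * math.cos(math.radians(o_lat))
        return east, north

w = World()
w.rebase(37.4419, -122.1430)                 # working in Palo Alto
w.objects.append((37.4419, -122.1430, "dino"))
w.rebase(37.7749, -122.4194)                 # later: working in SF
east, north = w.scene_position(w.objects[0])
print(f"dino is ~{abs(north) / 1000:.0f} km from the new origin")
# -> dino is ~37 km from the new origin
```

The stored object never moves; only the frame it is projected into does, so nothing needs overriding after a relocation.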

I also found it weird how even facing a different direction in a room affected how long it took to fix an issue. Sometimes I wouldn’t see something, but only because it was behind me and I happened to be sitting a different way.

Observation #6: Making projects about the real world actually changes how you think. Normally if you are sitting at a desk and working, certain kinds of things seem important. But if you are actually standing, interacting, and moving through real space, trying to avoid cars and the like, then the things you think are important are very different. I think it is hard to think about an AR game or project if you are inside. You have to playtest your ideas while actually outside, try to remember them long enough to make a bit of progress, then go back outside and test them again.

Implications redux

What’s funny about AR is that almost everybody is doing a piece of it – and now everybody has to talk. There are GIS people who think of AR as their turf. There are video game developers who think of AR as just a variation of a video game. There are environmentalists and privacy people and all kinds of people who are starting to really collide over the topic. All of the ambient and ubicomp people also see it as just a variation of what they’ve already been doing.

It’s also about action. At home in front of a computer you pretty much have most of what you need. In the real world you’re actually hungry, or bored or want to help on something – so it’s bringing that kind of energy – an Actionable Internet.

And spam in AR is really bad. Spam on the web or in email is mildly annoying – spam in AR is completely frustrating.

The crowd-sourcing group mind aspect is pretty interesting as it applies to the real world. It’s going to make it pretty hard to be a police officer – or at least those roles are going to have to change. I imagine that officers might even tell people where they’re placing radar guns so that people can help by being more cautious in those areas.

I also really like the idea of a zero-click interface – I think that's really appealing. I use my phone all the time when I am walking around – but being distracted can be frustrating. I imagine the perfect AR interface showing me one view only, and only of what I really care about – and I think that's probably where this is going. It is not like the web, where you have many websites that you go to. I think it will be more like just one view – and everything in that view ( even though those things come from different companies or people ) will have to interact with each other amicably. I'm really curious how that is going to be accomplished.

Also – I think it's not just a novelty. As I was working through this I started to see what other people were doing more clearly. And I got the strong impression that folks like Apple and Google are not just aiming to provide better maps or better data, but are actually aiming at heads-up displays where local advertising is blended into the view. I get the sense that there's a kind of hidden energy here to try and own what we think of as the “view” of our reality – so I expect the hype to get even bigger than it is now.

AR involves our actual bodies. In a video game you're not at risk. Even in the closest thing I can imagine – an online dating site – your profile can be anonymous. But if you're dealing with an AR situation there are real risks: stalking, traps, even just hunger, boredom and exhaustion.

Conclusion of cursory comments

There is something about how it implicates our real bodies. I guess I don’t really know or understand yet but I am finding it fascinating. And I also find it much closer to my deep interests in the world, and in being outside, standing up and interacting with a rich rich information space rather than just a computer machine space. I appreciate computers but I also love the real world and so mixing the two seems like a good idea to me. If we are our world and our world is our skin then in a sense we’re starting to deeply scrawl on our skin. What could possibly go wrong?

I asked Google to find something that would indicate this idea of writing on skin – and this is what Google told me was going to happen. I’m sure it will be alright.

Just seemed appropriate at the time.


More Technical Code Parts and Progress

The entire work so far took me about a week and a half. This was in my spare time and it was quite a lot of late nights. I had just started to learn the iPhone and I am sure as the weeks progress this work will get much better much faster.

The overall arc was something like this:

0) NO INTERFACE BUILDER. The very first piece of code I wrote built iPhone UI elements without using their ridiculous Interface Builder. IB means that people building iPhone apps cannot even cut and paste code to each other but must send videos showing how things connect. The entire foundation of shared programming is based on being able to “talk” about work – and having to make or watch a video, at the slow pace of video time, is a real barrier. So my first goal was to exercise programmatic control of the iPhone. This took me several days and came before I started on the application proper. Here is one case where I would have largely preferred Android.

1) SENSORS. I wrote a bare-bones OpenGL app without any artwork, read the sensors, and was able to move around in an augmented reality space and drop boxes. This took me a couple of days – largely because I didn't know Objective C or the iPhone. On the bright side, I now know these environments much better than I ever wanted to.

2) ART PIPELINE. First I tried using OpenGL by hand, but this became a serious limitation in terms of loading and managing art assets. I tried a variety of engines and after much grief settled on the SIO2 engine, because it was open source and the least bad. Once I could actually load art assets into a world I felt I had made some real progress. This only took a day, but it was frustrating because, coming from the professional games development world, I am used to tools of a much higher caliber.

3) MULTIPLE OBJECT INSTANCING. One would imagine that a core feature of a framework like SIO2 would be multiple instancing of geometry – but it had no such feature, which was surprising. I largely struggled with my own inability to comprehend how they couldn't see it as a highest priority. Some examples did do this, so I cut and pasted their code over and removed the application-specific features. I still remained surprised, and it ate up an entire two days to comprehend – I was basically looking for something that wasn't built in… it still surprises me actually.
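The pattern I was looking for is easy to sketch. This is illustrative Python, not SIO2's actual API (all the names here are mine): one shared geometry resource referenced by many lightweight instances that carry only their own transforms.

```python
# Hedged sketch of geometry instancing — not SIO2 code, just the shape of it:
# load the mesh once, reference it from many cheap per-object instances.

class Mesh:
    """Shared, loaded-once geometry (stand-in for an engine resource)."""
    def __init__(self, name, vertices):
        self.name = name
        self.vertices = vertices  # loaded once, referenced by every instance

class Instance:
    """Cheap per-object state: a reference to the mesh plus a transform."""
    def __init__(self, mesh, x, y, z):
        self.mesh = mesh
        self.position = (x, y, z)

dinosaur = Mesh("dinosaur", vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)])
herd = [Instance(dinosaur, x=i * 2.0, y=0.0, z=0.0) for i in range(100)]

# 100 objects in the scene, but only one copy of the geometry in memory.
assert all(inst.mesh is dinosaur for inst in herd)
```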

4) CAMERA POLISH AND IN HUD RADAR. I spent a few hours really polishing the camera physics with an ease in and ease out to the target for smoother movement, and making the camera position and orientation explicit ( rather than implicit in the matrix transform ). This was very important because then I was able to quickly throw up a radar view that gave me a heads up ( within the heads up ) of my broader field of view – and this helped me quickly correct small bugs in terms of where artifacts were being placed and the like.
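The ease-in/ease-out itself is the standard trick of moving a fixed fraction of the remaining distance each frame. A Python sketch of the idea (names are mine, not the app's actual code):

```python
# Illustrative camera easing: each frame, move a fixed fraction of the
# remaining distance to the target. This gives a smooth ease-out that
# never overshoots.

def ease_toward(current, target, fraction=0.1):
    """Move `fraction` of the way from current to target, per axis."""
    return tuple(c + (t - c) * fraction for c, t in zip(current, target))

camera = (0.0, 0.0, 0.0)
target = (10.0, 0.0, 5.0)
for _ in range(60):  # one second at 60 frames per second
    camera = ease_toward(camera, target)
# after 60 frames the camera has closed well over 99% of the gap
```

Keeping the camera position and orientation explicit, as the paragraph above says, is what makes it trivial to reuse the same state for a radar overlay.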

5) SERVER SIDE. While working briefly at the Banff Centre this summer I had written a Locative Media Server. It was basically a migration of another application I had written – a twitter analytics engine called ‘Angel’. Locative is a stripped-down generic engine designed to let you share places – in a fashion similar to bookmarking. I dusted off this code and tidied it up so that I could get a list of features near a location and post new features to the server. While crude and needing a lot of work, it was good enough to hold persistent state of the world so that the iPhone clients could share that state with each other in a durable way. Also, it is very satisfying to have a server-side interface where I can see all posts and can manage, edit, delete and otherwise curate the content. As well, it was an unrelated goal of mine to have a client side for the server – so I saw this as killing two birds with one stone. The currently running server instance is now being used as the master instance for this app and is logging all of the iPhone activity. Once the server was up it became easy to drive the client.

http://locative.makerlab.org

http://angel.makerlab.org
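The "features near a location" query is at heart just a great-circle distance filter. Here is a Python sketch of the idea – the field names and the in-memory list are illustrative, not Locative's actual schema; only the haversine math itself is standard:

```python
# Illustrative nearby-features query: filter markers by great-circle
# (haversine) distance from the client's position.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def features_near(features, lat, lon, radius_km=1.0):
    return [f for f in features
            if haversine_km(lat, lon, f["lat"], f["lon"]) <= radius_km]

markers = [
    {"name": "dinosaur", "lat": 37.7749, "lon": -122.4194},  # San Francisco
    {"name": "far away", "lat": 37.4419, "lon": -122.1430},  # Palo Alto
]
nearby = features_near(markers, 37.7750, -122.4195)
# → only the dinosaur marker; Palo Alto is tens of kilometres outside the radius
```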

6) IPHONE JSON. Another example of where the iPhone tool-chain sucks is just how many lines of code it takes to talk to a server and parse some JSON. I was astounded. It took me a few hours – mostly because I couldn't believe how much of a hassle it was and kept looking for simpler solutions. I ended up using TouchJSON, which was “good enough”. With this I was finally able to issue a save event from the phone up to the server and then fetch back all the nearby markers. This is key for a shared Augmented Reality, because we want all requests to be mediated by a server first. Every object has to have a unique identifier, and all objects have to be shared between all instances – just like a video game. I've done this over and over ad nauseam for large commercial video games, so I knew exactly what I wanted, and once I fought past the annoying language and toolchain issues it pretty much worked on the first try. It does need improvements, but those can come later – I feel it is important to exercise the entire flow before polishing.
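The save-then-fetch flow, reduced to a Python sketch with a fake in-memory "server" standing in for the real Locative endpoints (whose URLs and payloads I am not reproducing here). The key invariant is the one above: the server, not the client, assigns each object its unique identifier.

```python
# Hypothetical sketch of the client/server flow: clients POST objects as
# JSON, the server assigns ids, and every client can fetch the shared world.
import itertools
import json

class FakeServer:
    """In-memory stand-in for the real persistent server."""
    def __init__(self):
        self.objects = {}
        self._ids = itertools.count(1)

    def save(self, payload_json):
        obj = json.loads(payload_json)
        obj["id"] = next(self._ids)  # the server, not the client, assigns identity
        self.objects[obj["id"]] = obj
        return json.dumps(obj)

    def fetch_all(self):
        return json.dumps(list(self.objects.values()))

server = FakeServer()
saved = json.loads(server.save(json.dumps({"kind": "dinosaur", "lat": 0.0, "lon": 0.0})))
world = json.loads(server.fetch_all())
# every client that fetches the world can now reconcile objects by id
```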

7) TWITTER. I also fetched geo-located posts from twitter and added them as markers to the world as well. I found that this actually wasn’t that satisfying – I wanted something that showed more of an indication of real motion and time. This was something I had largely already written and enabling it in the pipeline was just a few minutes work.

8) WAZE. With my own server working I could now talk to third-party engines and populate the client side. Waze (an employer of mine) graciously provided me with a public API method to access an anonymized list of drivers. Waze in general is a crowd-sourced traffic navigation solution, free for a variety of platforms such as iPhone and Android. I fetched this data, added it to the database, and delivered it to the client so that I could show contrails of driver behavior. This is still a work in progress, but I'm hoping to publish this part as soon as I have permission to do so ( I am not sure I can release their data API URL yet ).
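A contrail, as I think of it, is just the anonymized position reports grouped by driver and ordered by time, emitted as one polyline per driver. A Python sketch (the tuple layout is illustrative, not Waze's actual API):

```python
# Illustrative contrail builder: group timestamped position reports by
# driver id and sort each group by time to get a drawable polyline.
from collections import defaultdict

def contrails(reports):
    """reports: iterable of (driver_id, timestamp, lat, lon) tuples."""
    by_driver = defaultdict(list)
    for driver_id, ts, lat, lon in reports:
        by_driver[driver_id].append((ts, lat, lon))
    # one time-ordered polyline per driver
    return {d: [(lat, lon) for _, lat, lon in sorted(pts)]
            for d, pts in by_driver.items()}

trails = contrails([
    ("a", 2, 37.1, -122.1),
    ("a", 1, 37.0, -122.0),
    ("b", 1, 38.0, -121.0),
])
# trails["a"] comes back time-ordered: [(37.0, -122.0), (37.1, -122.1)]
```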

9) FUTURE. Other things I still need to do include a number of touch-ups. The heading is not saved, so all objects re-appear facing the same way. I should deal with tilting the camera up and down. I should let you post a name for your dinosaur via a dialog. Perhaps the biggest remaining thing is to actually SEE through the camera lens and show the real world behind the overlay. I would also like to move towards providing an Augmented Reality OS where I can actually interact with things – not merely look at them.

Code Tool chains

I must say that the tools for developing 3D applications for the iPhone are quite poor compared to the kinds of tools I was used to having access to when I was doing commercial video games for Electronic Arts and other companies.

The kinds of issues that I was unimpressed with were:

1) The iPhone. XCode is a great programming environment, and the fact that it is free and of such high quality clearly helps the community succeed. There are many iPhone developers because the barriers to entry are so low. But at the same time Objective C itself is a stilted and verbose language, and the libraries and tools provided for the iPhone are fairly simple. People coming from the Javascript or Ruby on Rails world won't be impressed by how many lines of code it takes to do what would be considered a fairly simple operation, such as opening an HTTP socket, reading some JSON and decoding it. What is one or two lines of code in Javascript or Ruby is two or three pages of code in Objective C. For example, here are some comments from people in the field on this and related topics:

http://kosmaczewski.net/2008/03/26/playing-with-http-libraries/

http://cocoadev.com/forums/comments.php?DiscussionID=259

2) Blender is the status quo for free 3D authoring. It's actually quite a poor application. Its interface largely consists of memorizing hot-keys, and there is a lot of hidden state in sub-dialogs. Perhaps the biggest weakness of Blender is that it doesn't have an UNDO button – which makes mistakes very dangerous. I once spent an hour finally figuring out how to convert an object into a mesh and paint it, then accidentally deleted it and it was completely gone. Even things that are claimed to work, such as converting an object into a mesh, as often as not simply do not – and provide no feedback as to why. It's intriguingly difficult to find the name of an object or change it, to see a list of objects, or to import geometry or merge resource files. Claimed features such as KML import seem to fail silently. There are many old and new tutorials that are delightfully conflicting – and often their own wiki is offline or slow. It really does require going slowly, taking the time to read through it, and memorizing the cheat sheets:

http://wiki.blender.org/index.php/Doc:Manual/3D_interaction/Navigating

http://www.cs.auckland.ac.nz/~jli023/opengl/blender3dtutorial.htm

http://download.blender.org/documentation/oldsite/oldsite.blender3d.org/142_Blender%20tutorial%20Append.html

http://download.blender.org/documentation/oldsite/oldsite.blender3d.org/177_Blender%20tutorial%20Game%20Textures.html

http://www.keyxl.com/aaac91e/403/Blender-keyboard-shortcuts.htm

Blender came out of researching how to get objects into OpenGL on the iPhone – at the end of the day it was pretty much the only choice.

http://www.blumtnwerx.com/blog/2009/03/blender-to-pod-for-oolong/

http://iphonedevelopment.blogspot.com/2009/07/improved-blender-export.html

http://www.everita.com/lightwave-collada-and-opengles-on-the-iphone

3) SIO2. I first rolled my own OpenGL framework, but managing the scene and geometry loading was too much hassle, so I switched to an open source framework. SIO2 provides many tutorials and examples, and at least it seems to compile, build and run, and is reasonably fast. But it also has several limitations. The online documentation is infuriatingly empty – while they're proud of having documentation for all classes and methods, the documentation doesn't actually say what things DO – it just describes the name of the function and nothing else. And many of the tutorials conflate several concepts together, such as multiple instancing AND physics, so that one is unsure which features are orthogonal and independent. The broken English throughout also creates a significant learning curve. It is free, but it needs a support community to come around it and improve not the core engine but the support materials. SIO2 works closely with Blender and has a custom exporter – some of the tutorials on YouTube show how to use it ( the text documentation simply assumes these kinds of things, however ). Although I hate watching video tutorials, it is a pre-requisite to actually learning to work with the SIO2 pipeline. Overall, after a few days of bashing my head against it, I can finally use it with reasonable confidence.

http://sio2interactive.com/SIO2_iPhone_3D_Game_Engine_Technology.html

http://sio2interactive.wikidot.com/sio2-tutorials-explained-resource-loading

4) Google Sketchup appears to be the best place to find models. I couldn't actually get Google Sketchup to install and run on my new MacBook Pro, so I ended up using a friend's computer to dig through the Google Sketchup Model Warehouse, import a model, and convert it to 3DS. I had to use Google Sketchup PRO because I couldn't find any other way to get models from KMZ into 3DS or some other format supported by Blender. The Blender Collada importer fails silently and the Blender KML importer fails silently. The only path was via 3DS.

http://sketchup.google.com/3dwarehouse/details?mid=f95321b35c2817bdaef005b7f8d10dde&prevstart=12

http://www.katsbits.com/htm/tutorials/sketchup_converting_import_kmz_blender.htm

5) Camera. One of my big goals which I have not hit yet is to see through the camera and show the real world. On the iPhone this is undocumented – but I do have some links which I will be researching next:

http://hkserv.ugent.be/boudewijn/blogs/?p=198

http://www.developerzone.com/links/search.html?query=domain%3Amorethantechnical.com

http://code.google.com/p/morethantechnical/

http://mobile-augmented-reality.blogspot.com/2009/11/iphone-camera-access-module-from.html

Technical Conclusions

My overall technical conclusion is that Android is probably going to be easier to develop applications for than the iPhone. I haven't done any work there yet, but the grass seems very green over there. The choice of Java as the language seems like it would offer a more pleasing, simple and straightforward grammar, and access to lower-level device data such as raw camera frames seems like it would also help. Also, since there are fewer restrictions in the Android online stores, it seems easier to get an app out the door. As well, it feels like there is more competition at the device level, and that devices with better features will emerge more rapidly for the Android platform than for the iPhone. I think if I had tried to do this for the Android platform first it would have been doable in 2 to 3 days instead of the 7 or 8 days that I have ended up putting into it so far.

My overall general conclusion is that AR is going to be even more hyped than we see now but that it will be hindered by slow movement in hardware for at least another few years.

I do think that we’re likely to see real changes in how we interact with the world. I think today the best video that shows where I feel it is going is this one ( that isn’t even aimed at this community per se ) :

AICP Southwest Sponsor Reel by Corgan Media Lab from Corgan Media Lab on Vimeo.

Augmentia http://blog.makerlab.com/2009/11/augmentia/ http://blog.makerlab.com/2009/11/augmentia/#comments Tue, 03 Nov 2009 18:48:08 +0000 http://blog.makerlab.com/?p=821 augmentia_by_doctorwat

(Augmentia - with permission from DoctorWat)

“You can find anything at the Samaritaine” is this department store’s slogan. Yes, anything and even a panoramic view of all of Paris. All of Paris? Not quite. On the top floor of the main building a bluish ceramic panorama allows one, as they say, “to capture the city at a glance”. On a huge circular, slightly tilted table, engraved arrows point to Parisian landmarks drawn in perspective. Soon the attentive visitor is surprised: “But where’s the Pompidou Centre?”, “Where are the tree-covered hills that should be in the north-east?”, “What’s that skyscraper that’s not on the map?”. The ceramic panorama, put there in the 1930s by the Cognac-Jays, the founders of the department store, no longer corresponds to the stone and flesh landscape spread out before us. The legend no longer matches the pictures. Virtual Paris was detached from real Paris long ago. It’s time we updated our panoramas.”

The World is the Platform

Augmented Reality is going to make it possible for us to see through walls. It will remove some of the blindness that has crept up around our industrial landscape. But what is the “use” of this tool we’ve fashioned? And how will it even be implemented; how will many different app developers ever agree on what we see from a single window?

In a couple of weeks a bunch of us are going to get together to talk about this at ARDevCamp . But as a pre-amble to that I thought I’d share some of my own questions, thoughts and observations.

The hype has started to become real, as William Hurley observes. Personally I blame Bruce Sterling, but perhaps the iPhone 3GS and Android phones share some of the blame. This last week's prime example should have been brought to us by companies like TomTom or Garmin, given recent acquisitions. Instead (in what is clearly a longer-term strategy) Google stopped licensing TeleAtlas in the USA and started providing their own higher quality maps and UI (taking a bit of a stab at Apple at the same time, not to mention the Open Street Maps community). The interface itself is shifting from a traditional top-down cartographic orthodoxy to become more game-like, with street-view projections, heads-up displays and zero-click interfaces. The hidden pressure underneath these moves may not be just to provide better maps but to provide a better, higher-fructose reality. A candy-coated view that shows you just what you want, just in time, decorated with lots of local advertisements and other revenue catch-basins. Cars and traffic reports are just the gateway.

In my mind this isn't just hype but something relevant and important. Augmented Reality isn't just an academic or even safe exercise. It connects in a very primal and critical way to who we are as humans. It's not just an avatar in Second Life or a profile on OKCupid – it is us. It puts our own embodiment at risk. And whomsoever can mitigate that risk while providing reward will probably do well. I believe that organizations such as Apple and Google see this and are pursuing not merely real-time, hyper-local or crowd-sourced apps but ownership of the “view”. They want to own the foundation: the single consistent seamless way of presenting an understanding of the world. And as such it is about to become extremely competitive. Everybody wants a part of the lens of reality, the zero-click base layer beneath the beneath. As Gene Becker puts it, “The World is the Platform”. And an ecosystem is starting to emerge.

Personally I’m trying to approach an understanding with praxis; balancing between time reading and time making. On the making side I’ve been writing an Augmented Reality app for the iPhone. For me that’s already a unique exercise. It’s the first time I’ve written code and then had to actually go outside into the real world to test it. On the thinking side, and coming from an environmental interest, and from a critical arts and technology perspective I’ve also been fascinated by how we understand and use Augmented Reality.

Collision of Forces

Like many new technologies Augmented Reality magnifies tensions between things that were normally separate.

In a sense it is the same dream that the social cartography community has had. This is the community that coalesced around Open Street Maps, Plazes, Where 2.0 and the idea of geo-tagging as a whole. This was a vision of a crowd-sourced bottom up community driven and community owned understanding of the world. It is a vision that failed in some ways. Yes we have nice free maps but we never did get to the point of being able to see our friends, or the contrails of where our friends had been, or really where the best nearby place to have a nap was. But now the idea is returning more forcefully and with more determination than ever.

It is also about an actionable Internet. There is a community that is rebelling against the morbidity of indoor culture and a largely passive media consumption centric lived experience. One that wants to decorate the world with verbs and actions – that wants to put knobs and levers on everything – or at least make those knobs and levers more visible. Diann Eisnor talks about Transactional Cartography – an idea of maps that are not passive – that don’t just show you where you can solve a problem – but that hear your request for help and call you back with solutions. Just imagine the kinds of trust and brokering negotiation infrastructure that this inevitable end game implies.

It is also about an ideal of noise filtering as a pure problem. There’s been a long and unsolved problem of building working trust networks on the web as a whole. Even aside from spam there are acres of rotting bits out there that will completely drown out any new view unless they are filtered for. Many social graph projects have failed to help filter the deluge of information that we are inundated with every day. When you can’t see the forest or the trees then this becomes a much higher priority to resolve.

It dredges up an amusingly disparate rag-tag collection of development communities who have been safely able to ignore each other. Suddenly game developers are arguing with GIS experts and having to unify their very different ways of describing mirror worlds. Self-styled Augmented Reality Consortiums are emerging with the proposition to define the next generation notational grammars by which we will share our views of reality.

It brings the ubiquitous computing and ambient sensor network people to the table. These are folks who had safely been hiding out in academia for the last decade doing exotic, beautiful and yet largely ignored projects.

It creates a huge pressure and demand for interaction designers to actually make sense of all this and make interfaces that are usable.

It draws a pointed stare towards the act of siloing and building moats around data. When your FourSquare cannot see your Twitter and when your Layar view can’t show the gigantic T-Rex stomping towards you … well people just aren’t going to put up with that anymore. What is needed is a kind of next generation Firefox or foundation technology that underpins and unifies these radically disparate realities.

It is going to take the idea of crowd-sourcing to a wildly energetic new level above where it is now. When your body is on the line, the idea of real-time tactical awareness suddenly becomes much more important to everybody. When the SFPD can volunteer that they're going to put a radar gun at a location, or when a driver can post about a car accident to the cars behind him or her – you start to involve a real-time understanding that affects your quality of life in a visceral way. It's almost the beginning of a group organism. Something that goes beyond mere flocking behaviors and becomes more like a shared nervous system. It's an evolution of us as a species – and probably just in time as well, given the kinds of environmental crisis we are facing.

It takes the Apple ideals of interface to a new level. Instead of one click there are zero clicks; the interface becomes effortless. As Amber Case puts it interfaces move from being heavy and solid with big heavy buttons and knobs and rotary dials to becoming liquid and effortless like the dynamic UI of the iPhone to becoming like air itself. They become part of the background, ambient and everywhere, we breathe them and can see through them, the virtual pressure of these interfaces becomes like an information wind steering us around invisibly like toy boats on a lake.

It will connect us to the environment, because everything actually is connected to the environment – we just manage to ignore this. Our natural environment underpins everything around us. There's a feeling in the movement that things are constantly getting worse. That we're losing more of Eden every day. We hear in the media about plastic oceans, carbon dioxide and the like. Derrick Jensen asks “what would it take to live in a world where every year there were more salmon, and every year there were more birds overhead, and less concrete and more trees?” Paul Hawken talks about an idea of thousands of local organizations, each developing a local understanding of their region and each working in parallel over local issues. When people can see environmental issues around them, and connect those issues more simply to related economic issues, it will vitalize action.

It will do interesting things to national boundaries. When you can look through walls and see other kids who are exactly the same as you – clearly that will have some kind of impact. Either to humanize us or to make us carry an even greater burden of cognitive dissonance.

It even brings out that eternal question of what it means to be human. We're so willingly embracing technology today it almost feels like a planet-wide mania. Consider how the One Laptop Per Child is challenged in terms of whether it is the best and cheapest technology device for kids – but rarely is there a question of whether technology at all is the right thing. We give some kids augmentia while other kids pry precious metals out of old desktops while coughing out toxic smoke from nearby jury-rigged smelter operations.

As Sheldon Renan posits in his ‘theory of netness’ a sufficiently dense network exhibits an emergent behavior. A virtuous field is created that affects not only the participants in the network but everything around it, even things not directly connected to it. By way of allegory in the United States we used to back our currency with gold. At some point we left that backing because the illiquidity was a hindrance to velocity. Local area information is about to get a similar speed up and disconnection from its argumentative grounding. You won’t have to visit city records to see the hidden history of the homes around you or the supply chain behind a package of smarties. AR is in some ways like seeing the speculative sum of the Noosphere. Privileged information may become cheaper. Inflationary economies may take hold. But by making hidden things visible, and visible things cheap, it will make other things possible that we don’t entirely realize yet.

Historical Perspective

In 1997 I co-founded Virtual Games Inc. We were a specialized 12-person venture-funded games company focusing on real-time, immersive, many-participant shared experiences. You could put on a VR helmet and run around in our game worlds and interact with other players ( usually by shooting them, unfortunately ).

Back then the relatively moderate performance of 3D rendering hardware made it difficult to keep up with the rapid head movements of the players. The lag between moving your head and seeing the 3D display repainted could make you nauseous. Today the average video game machine such as the Wii, Xbox or PlayStation 2 can paint around 100,000 lit, shaded polygons at 60 frames a second, but back then home computers were much like the mobile devices of today: capable of only very limited 3D performance.

The biggest challenge we faced wasn’t hardware however. Rather it was simply knowing where to start; how to define the topic as a whole. We had very few examples. Issues such as User Interface controls that could be used while moving, having a Heads Up Display, having a radar view, or decorating the VR world with visibly striking markers – these were all fairly novel ideas. We didn’t have a design grammar for representing the objects, their relationships and how they behaved.

Today many of the same issues are occurring again with Augmented Reality. The synchronization and registration between the movement of the real world and the digital overlay can feel like being on a ship at sea. Presenting complex many polygon animated geometries that interact with the users is still a challenge – especially on mobile devices where the camera is fairly dumb and the computational power limited. Making a publishable data representation of an avatar or interactive digital agent is in and of itself a significant challenge. There are fundamentally new ways of interacting that still haven’t been very well defined. The Augmented Reality Operating System has yet to be invented.

Now as a result there are fervent discussions about how to describe, publish, share and run an Augmented Reality world. People are trying to design an ARML ( Augmented Reality Markup Language ), much as occurred years ago over VRML ( Virtual Reality Modeling Language ). But the whole space still lacks the cognitive short-hand and the usability expertise that characterizes web development today.

AROS

“For instance, do you see this chunk of land, washed on one side by the ocean? Look, it’s filled with fire. A war has started there. If you look closer you’ll see the details.” Margarita leaned towards the globe and saw the little square of land spread out, get painted in many colours, and turn as it were into a relief map. And then she saw the little ribbon of a river, and some village near it. A little house the size of a pea grew and became the size of a matchbox. Suddenly and noiselessly the roof of this house collapsed, so that nothing was left of the little two-storey box except a small heap with black smoke pouring from it. Bringing her eye still closer, Margarita made out a small female figure lying on the ground, and next to her, in a pool of blood, a little child with outstretched arms. “That’s it,” Woland said, smiling, “he had no time to sin. Abaddon’s work is impeccable.”

Building the technology for a next generation OS is going to be challenging.

There will need to be some way of publishing AR objects onto the Internet. This description will have to specify how an AR object would like to be presented: its geometry, described as a series of polygons or mathematical surfaces, plus texture, appearance, lighting and animation. Often appearance is tied to underlying functionality, and a description of the behavior of the object needs to be shipped as well. Some of this behavior is gratuitous – eye-candy for the viewer – and some is utilitarian, actual work that the object may do for you. The clearest legacy for this kind of description comes from the world of video games.
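As a concrete sketch of what such a publishable description might bundle together, here is a minimal, invented schema – the field names, file names and the `required_fields_present` helper are all assumptions for illustration, not any real AR format:

```python
import json

# A hypothetical schema for a publishable AR object: geometry, appearance,
# animation and a behavior hook all travel together, much as they do in
# video game assets.
ar_object = {
    "id": "lamp-post-banner-01",
    "geometry": {"type": "mesh", "vertices": 1024, "source": "banner.obj"},
    "appearance": {"texture": "banner_diffuse.png", "lighting": "unlit"},
    "animation": {"clip": "flutter", "loop": True},
    # Behavior ships with the object: some gratuitous, some utilitarian.
    "behavior": {"on_gaze": "show_details", "on_tap": "open_link"},
}

def required_fields_present(obj):
    """Check that a description carries the pieces an AR client would need."""
    return all(k in obj for k in ("id", "geometry", "appearance", "behavior"))

serialized = json.dumps(ar_object)  # what would go over the wire
restored = json.loads(serialized)
```

The point is not this particular schema but that presentation and behavior are published as one document, the way game engines already package assets.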

Unlike the traditional web there will probably be one view – not many separate web-pages. Everybody’s stuff will pour together into one big soup. Therefore there will need to be a way to throttle the 3d objects presented to you – limiting the size, duration and visual effects associated with those objects so that one person’s objects do not drown out another’s. Objects from different people will have to interact gracefully with the real world and with each other.
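One way such throttling could work is a per-publisher render budget: everyone gets the same allowance per scene, so nobody can drown anyone else out. The budget numbers and cost model below are invented for illustration:

```python
# A sketch of per-publisher throttling. Each publisher gets the same render
# budget; objects are kept in publication order until a budget runs out.
RENDER_BUDGET = 100  # arbitrary "cost" units allowed per publisher per scene

def throttle(objects):
    """Keep each publisher's objects until their shared budget is spent."""
    spent = {}
    visible = []
    for obj in objects:
        owner = obj["publisher"]
        cost = obj["size"] * obj["duration"]
        if spent.get(owner, 0) + cost <= RENDER_BUDGET:
            spent[owner] = spent.get(owner, 0) + cost
            visible.append(obj)
    return visible

scene = [
    {"publisher": "alice", "size": 10, "duration": 5},   # cost 50: shown
    {"publisher": "alice", "size": 10, "duration": 6},   # cost 60: over budget
    {"publisher": "bob",   "size": 2,  "duration": 10},  # cost 20: shown
]
```

A real system would presumably score by screen space and attention rather than this toy size-times-duration cost, but the fairness principle is the same.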

There will be an ownership battle over who owns ordinary images. Augmented Reality views may be connected to the real-world images around us. An image of an album cover could show the band’s website, or it could show Amazon.com – depending on who ends up winning this battle. An image of you could show your home-page or a site making fun of you. Eventually a kind of Image Registry will emerge where images are connected to some kind of meta-data. An AR view would talk to this database.
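A hypothetical Image Registry of this sort might map an image fingerprint to competing metadata claims, with some policy deciding which claim an AR view actually shows. The fingerprints, claimants and priority policy here are all invented for illustration:

```python
# A toy Image Registry: an image fingerprint maps to a list of competing
# claims, and a policy (here, highest priority wins) picks what an AR
# view displays. Who sets "priority" is exactly the ownership battle.
registry = {
    "album-cover-fingerprint-123": [
        {"claimant": "the-band", "url": "http://band.example.com", "priority": 2},
        {"claimant": "retailer", "url": "http://shop.example.com", "priority": 1},
    ]
}

def lookup(fingerprint):
    """Return the winning claim for an image, if anyone has registered it."""
    claims = registry.get(fingerprint, [])
    if not claims:
        return None
    return max(claims, key=lambda c: c["priority"])
```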

There will be user interface interaction issues. What will be the conventions for hand-swipes, grabs, drags, pulls and other operations to manipulate objects in our field of view? We’re going to evolve a set of gestures that don’t conflict with the gestures we use around other humans but that are unambiguous.

There will be a messaging system. It’s pretty clear that most signage, sirens, alerts and social conventions will be virtualized. You’ll probably be able to elevate your car to ambulance status under certain conditions and have everybody clear the road ahead of you, for example. This kind of transaction will require agreement on protocols at least – aside from privileges, permissions and payment systems.

There will probably be huge incentives to have trust well defined. Since your actual body is usually involved in an augmented reality – you’re likely to be more sensitive about full disclosure. Trust is usually accomplished by a whitelist of friends who are allowed to see you or contact you – and perhaps one or two degrees of separation may be allowed as well.
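The whitelist-plus-degrees-of-separation idea can be sketched as a breadth-first search over the friend graph: you are visible to direct friends, and optionally to friends-of-friends. The friend graph and names below are invented for illustration:

```python
from collections import deque

# A toy friend graph. Visibility is granted to anyone within
# max_degrees hops of the owner on this graph.
friends = {
    "me":    {"ana", "ben"},
    "ana":   {"me", "carol"},
    "ben":   {"me"},
    "carol": {"ana", "dave"},
}

def may_see(viewer, owner, max_degrees=2):
    """Breadth-first search the friend graph out to max_degrees hops."""
    frontier = deque([(owner, 0)])
    seen = {owner}
    while frontier:
        person, depth = frontier.popleft()
        if person == viewer:
            return True
        if depth < max_degrees:
            for friend in friends.get(person, ()):
                if friend not in seen:
                    seen.add(friend)
                    frontier.append((friend, depth + 1))
    return False
```

Direct friends are one hop, friends-of-friends two; anyone further out simply does not see you at all.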

New Senses

Of course we can imagine that we’ll move past these challenges. And then it becomes like any human prosthetic; integrated with our faculties, shifting who we are, and becoming invisible. Modern video games have a well-framed design grammar that is taken for granted – the experience of being in a VR world is completely natural. Mobility, teleporting, just-in-time information – all completely normal. We can navigate a VR world with about the same ease that we can trace our finger along a map or browse the chapters of a book. And like maps or books, if it is convenient and helpful then it becomes necessary.

Today I am sitting in the park between the Metreon and the San Francisco Museum of Modern Art. I’m currently surrounded by thousands of “agents”, ranging from birds to pedestrians to street-signs to the grass itself. Clearly we are fit for this world we live in. Plants in general are color coded in such a way that their coloration has critical meaning for us. There is a well understood inter-species dialogue between ourselves and other kinds of agents at many levels. The pace of the world runs at about the pace of our ability to keep up with it. Our world is highly interactional – a total tactile and sensory immersion if we permit it. Our whole body is ventured and at risk. The world affects and defines us by the compromises we make; we put substantial cognition into avoiding harm. It is not about arbitrary irreverent static images floating around in our field of view like a detached retina. We are a persistent but porous boundary between an inner state and an outer state. Our embodiment is affected by the powers and needs we have.

Augmented Reality is (I imagine) more of a new kind of power. It isn’t quite like our own memory or quite like the counsel of friends. It stands in its own right. It is not simply “memory” – it isn’t just a mnemonic that helps bring understanding closer to the surface of consciousness. A view instrumented with extra hints and facts is of course not entirely novel. Clearly we are surrounded by our own memories, signage, advertising, radio, friends’ voices and an already rich complicated teeming natural landscape loaded with signifiers and cues. But it is another bridge between personal lived experience and the experience of others. It seems to lower the costs of knowing, and it seems to provide stronger subjective filters. A key aspect is that it seems to be faster. It’s as if we are evolving in a Lamarckian fashion to deal with a new kind of world.

It is hard to imagine what having a new sense is like. Recently I was invited by Mike Liebhold at the IFTF to hear Quinn Norton talk about having had magnetic implants in her fingers. She is the writer for Wired Magazine who interviewed Todd Huffman a few years back on the same topic and then had the procedure done herself. By brushing her fingers over a wall she could literally feel the magnetic field lines where the electrical wires ran underneath the surface. Her mind integrated this as a new sense; not merely a tugging on her fingers but a kind of novel sensory field awareness. Quinn also spoke about wearing a compass cuff; a small ankle bracelet that would buzz on the north-facing side. Over time it gave her an awareness of which direction was true north. It wasn’t just a buzzing feeling in her leg, but a feeling for her orientation with respect to the world. This kind of sensory awareness may be like what a homing pigeon feels intuitively. Choices we make may be quietly guided by an understanding we have.

Boxes

Who have persuaded man that this admirable moving of heavens vaults, that the eternal light of these lampes so fiercely rowling over his head, that the horror-moving and continuall motion of this infinite vaste ocean were established, and continue so many ages for his commoditie and service? Is it possible to imagine any thing so ridiculous as this miserable and wretched creature, which is not so much as master of himselfe, exposed and subject to offences of all things, and yet dareth call himself Master and Emperor.

Dirt architecture has leaned in the direction of making our world simpler, safer and dumber. It seems largely to have been about the imposition of barriers, walls and structures to reduce the complexity of the world. This is still prevalent today. Perhaps the primary legacy of the Industrial Age is the fence.

Many of us still live sheltered box lives. In the morning you enter the small box that is your car and it safely navigates you to your office. During this journey you are protected from the buffeting winds, from people, from noise and from most other distractions. Once at the office you sit down in your cubicle, the walls safely blinkering away distractions as you myopically gaze into the box of your computer screen. Even the screen itself consists of very clearly delineated boxes. There are buttons that say “go” and buttons that say “cancel”. There is no rain, no sun, no noise. After the day’s work ends you get back in your car and you drive home. When you arrive at home you close the door behind you and relax – ignoring the outside world held at arm’s length outside of your domain.

There is a sense of pleasure in this artificial simplicity. A sense of closure, understanding and a lack of fear about things being hidden. There is also an undue sense of speed at our ability to race through these spaces very quickly.

This pattern is similar to that of working by yourself versus working with others. You gain privacy, concentration, control and velocity by doing it yourself, but you lose an ability to crowd-source problems and to avoid repeating work and energy that others have already put in. By expending more energy on being social you save energy on wasted effort.

This extends to the way we shop at Whole Foods, Costco, Walmart, Ikea and other such big box stores. Certainly part of the reason we don’t use local resources as much as we could is that we simply can’t see them. We don’t know that we can just pick an apple instead of buying one. We don’t know that a certain garage sale has what we need or that there’s an opportunity to volunteer just around the corner.

If we interact with spaces primarily as a series of disjoint divisions then we tend to think our actions on the world can be contained without side-effects. In any busy city you can see the store owners and proprietors manicure the space directly in front of their building. Planting plants, brushing the pavement, creating a sense of mood and ambiance around their particular restaurant. And that obligation stops immediately at the margins of their property line. Of course this just pushes negative patterns to the edge where pressure builds up more strongly.

Our aesthetic leads us to try to whitewash reality and yet it pokes through. An urban landscape becomes clotted with thrown-away garbage, sidewalks blackened with bubble gum. Paint peels, weeds crack the pavement. We sometimes see vagrants, beggars and the dispossessed raging against the world, noisy, bothersome; frightening even. We see their helpless entanglement and inability to be indifferent as a kind of betrayal of Utopia.

Simplicity, linear surfaces, boxes, walls. These patterns fail because they hide but do not eliminate side-effects. In fact they magnify them. It is the lack of synthesis between spaces, the lack of free movement between them that makes pressures build up. If you can’t understand that you could share a ride with a new friend to work, or that kids are constantly vandalizing your street because they used to exhaust themselves instead in a wilder more abandoned overgrown forest, then you tend to work against opportunities, you end up spending more energy to get less.

This is so unlike a dirty natural entangled world where you have little say in how the world is phrased. Where one brushes through spider webs and thorns stick to you and you have to walk all the miles to a hopeful uncertain destination. You get wet and dirty and hungry and tired and rained on and slapped silly by nature if you make a dumb mistake. You have to balance many forces in opposition and if you tug on one thing you find it connected to everything else in the universe. In nature one is constantly leveraging the landscape itself, working very closely with what it affords and simply steering those resources slightly to your benefit rather than asserting them so strongly. And it is there that we always seemed happiest.

Augmented Reality seems to at least offer the possibility that we can punch some holes in the boxes. It seems to offer a bridge between structure and chaos rather than just structure. It is fundamentally different to see that something in geographical proximity to you is actionable than to see it in a list view on Craigslist or read about it in a newspaper. It becomes a physical act – you can walk towards it, you can judge if you should participate.

Use

AR is a precise assault on dirt architecture. It is a response to design – not by changing the world but by changing us. It is as if we’ve become fatigued with the attempts to refashion perspective with dirt and are instead just drawing lines in the air. How will we use it? And by use I mean use in the same way that we wear a garment or use an art object – the value we derive from it individually and culturally.

The First Union Methodist Church of Palo Alto on Webster Street is designed to evoke a certain emotion. It has a Gothic style with many small windows arranged to a peak. To me these tiny windows seem to imply souls, perhaps ascending to heaven. That the windows are small also seems to imply a certain kind of suffering in life and a certain role of humility. The architect who designed this invoked a visual language that subconsciously refers to historical references and understanding. Carlton Arthur Steiner, the designer, may indeed not have been a fully rational actor; much in the way that we casually gesture with our hands and expect others to understand those gestures even though we don’t fully know them ourselves as rational acts.

This church is a fairly objective object in our shared reality. We may bring our own prejudices, history and understanding to our perception but it exists as a series of reinforcing statements by an amalgamation of the people around it. To avoid a Wittgenstein-like knot: I use my perception of said church a different way than another person but I am not using something else entirely; there is some portion of it shared between different views.

Contrast this with the augmented reality case, where the church may not even be there, or may be some other completely arbitrary and alien cartoon artifact – something so subjective to each user that agreement is radically impossible.

We’ve always draped our landscapes with our opinions. We downscore certain things, upscore other things and in this way exhibit a kind of prejudice. We’re afraid of and offended by people who are down and out, we embrace a certain definition of nature, and a certain definition of beauty. We think certain kinds of architecture, space and geometry are beautiful. There is a set of cultural aesthetics that biases us to value certain kinds of artifacts, shelters and structures over others. We read between the lines in many cases, seeing the rules that guided outcomes, seeing policy and choice as reflected in the geometry of our world, and nod approvingly or disapprovingly.

Most of us are not architects and don’t have permission to rewrite our landscapes anyway. We’ve had to comfort ourselves with criticism in text, image, placard or graffiti to communicate our point of view. Often it was at a degree of remove – not so closely conflated and overlaid with the view as augmented reality affords. Even graffiti is somewhat transitory and superficial; it is not a deep rewriting of structure ( at least not yet ).

In an augmented world these factors all move around. Your critical statement may be directly attached to the subject in question; not at a remove. Your statement is explicit, it can be published to other people, it isn’t just in your head. But at the same time your statement is increasingly subjective. It loses some of the value of an embodied artifact.

In an Augmented Reality we can erase buildings that offend us and we can paint golden halos around people that we like. We can prejudice our contemporaries and fuel a kind of hyper tribalism if we wish. But at the same time our power is diminished unless we can get a large portion of the mainstream to agree with our view.

Consensus

AR views will make our prejudices more visible and more formal. But they will also make them more subjective. Different people will subscribe to different views and build up quite a bit of bias before they’re forced to reconcile that with other people.

It may very well be that the role of consensus builder, or at least the role of holding a consensus space where issues of consensual reality can be debated, may become most important. I imagine that the role of a bartender for example, a neutral stakeholder who bridges other people together by offering a shared public space, might become quite important.

Let’s imagine that three people walk into a bar:

The first person, let’s call her Mary, a liberal environmentalist, has an augmented reality view that shows the carbon footprint of the people and objects around her. She can also see where the rivers used to run through the urban landscape, she can see if food is locally sourced and if purchasing power goes back into her community. She can see where super-fund sites are and where poverty levels are higher.

The second person, let’s call him Derek, an artist, has an augmented reality view that redecorates the landscape around him with a kind of graffiti. All surfaces are covered with cartoon-like creatures voicing criticism, comments, banal humor and art. He automatically has a critical perspective that lets him better understand others’ assumptions. He can see the contrails of his friends’ passage, the tenuous connections between people, and the location of upcoming art events in the area.

The third person, let’s call her Sarah, has a neo-American point of view and, say, is deputized as a police officer. She can see the historical pattern of crime in the area, the traffic congestion, parking zones and gps speed-traps; she can raise her space to emergency-vehicle status if she needs it, see the contrails of important people in the neighborhood, and turn streets off and on.

The bartender serves them all a round of beers on the house and they sit down to talk about and share their differences.

Each of them is going to see their beer, and each other, in a radically different light based on their powers. For Mary the beer may appear especially palatable due to being locally sourced. For Derek the beer may have an attached satire which plays out about the human condition. For Sarah the beer may be seen with respect to late-night noise ordinance violations surrounding the pub. This is on top of any personal memory that they have.

They get to talking about the beer, how regulated it should be, how it should taste and the like. A small typical bar conversation, but prejudiced by fairly strongly colored and enhanced points of view. Each participant thinks they are picking facts but they’re in fact picking opinions. Over time each one has subscribed to a set of prejudices that fundamentally altered what they now see. It alters how quickly they reach for the drink, it alters if they even enjoy it.

Over the issue of regulation Sarah might say that the sale of alcohol should be restricted. Derek might say that the alcohol should be served frozen so that it takes longer to consume. Mary might argue against regulation at all.

Each person’s views are accumulated views. They are accumulated out of networks of people with like minds. Some networks are based on friendship, similar sentiment and trust. Other networks are constructed out of hierarchical chains of command. Each of these individuals reflects not just themselves but is a facet of a larger community and a larger set of views.

What comes to the table is not Mary, Derek and Sarah but Mary’s tribe, Derek’s tribe and Sarah’s tribe.

And the resultant consensus conflict becomes a classic case of the same kind of pathology that occurs when anthropologists try to understand a new culture. Each person is burdened by a deeply framed cultural lens that makes it difficult to really see things as they are. There is a tendency for all of us to divide the world into categories or into prototypical objects, and to then classify what we see as an example of some kind of object. We build mental machinery to deal with objects – we know how to deal with dogs or cats or a car – and we can mistakenly treat something as dog-like or cat-like when it in fact is dangerously not quite so. We cannot always give all things equal weight all the time, and in prioritizing, categorizing and scoring we necessarily create prejudice.

The redeeming difference here is that each of these participants can choose to trade views. Sarah can put on Derek’s view, and Derek can put on Mary’s view, and Mary can put on Sarah’s view. They can now see the world as scored from the other person’s point of view.
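The bar scene can be sketched as swappable scoring filters over a shared object: each "view" scores the same beer differently, and handing your view to someone else is just applying your filter for them. The object fields, the scoring rules and the names are all invented for illustration:

```python
# The same shared object, seen through three different subscribed views.
beer = {"locally_sourced": True, "noise_complaints": 3, "satire": "the human condition"}

# Each view is just a scoring filter reflecting its owner's needs.
views = {
    "mary":  lambda o: 1.0 if o["locally_sourced"] else -1.0,   # environmentalist
    "derek": lambda o: 0.5 if o.get("satire") else 0.0,         # artist
    "sarah": lambda o: -0.2 * o["noise_complaints"],            # deputized officer
}

def see(person, obj, borrowed_from=None):
    """Score an object through your own view, or through a borrowed one."""
    viewpoint = views[borrowed_from or person]
    return viewpoint(obj)
```

Trading views is then literally `see("mary", beer, borrowed_from="sarah")` – Mary keeps her own identity but scores the world with Sarah's filter.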

We find that Sarah has a personal financial benefit to seeing the world in her perspective. Her point of view is necessarily beneficial to continuing to earn a living. Derek perhaps also has a similar dependency. His point of view is necessarily driven by a need to keep maintaining street credibility with his artist peers. Mary’s point of view is driven by and self-reinforced by a caution and concern for her well-being. Each of these points of view is an embodiment of needs.

There’s both a risk and a promise here: Augmented Reality will magnify prejudice, but it may also help us more clearly see each other’s prejudices. More to the point, we’ll hopefully be able to trace back down to the basic needs that lead to specific prejudicial postures. We can unwind the stack and get down to embodiments – perhaps we can tease apart our deep differences, or at least respect them.

Links

http://swindlemagazine.com/issue08/banksy/

http://www.walletpop.com/blog/2009/10/09/want-better-service-just-complain-on-twitter/

http://en.wikipedia.org/wiki/Puddle_Thinking

http://radar.oreilly.com/2009/10/augmented-reality-apps.html

http://crisismapping.ning.com/profiles/blogs/crisis-mapping-brings-xray

http://www.informationisbeautiful.net/leftvright_world.html

http://www.nearfield.org/2009/09/nearness

http://www.cityofsound.com/blog/2009/10/sensing-the-immaterial-city.html

http://www.urbeingrecorded.com/news/2009/09/22/rss-augmented-reality-blog-feeds/

https://xd.adobe.com/#/videos/video/436

http://www.readwriteweb.com/archives/ex-googler_brizzly_creator_on_real-time_web_filtra.php

http://www.bruno-latour.fr/virtual/index.html#

http://mashable.com/2009/10/18/wolfram-alpha-iphone-app/

http://www.techcrunch.com/2009/10/20/wowd-takes-a-stab-at-realtime-search-with-a-peer-to-peer-approach/

http://goblinxna.codeplex.com/

look at a variety of iphone 3d engines such as the ones used during GOSH

http://www.abiresearch.com/research/1004454-Augmented+Reality

http://www.timpeter.com/blog/2009/10/06/how-important-is-local-search-heres-a-hint-extremely/

http://virtual.vtt.fi/virtual/proj2/multimedia/alvar.html

http://pointandfind.nokia.com/

http://www.augmentedenvironments.org/blair/2009/09/23/has-ar-taken-off/#more-104

http://www.readwriteweb.com/archives/robotvision_a_bing-powered_iphone_augmented_realit.php

http://www.businessweek.com/technology/content/nov2009/tc2009112_353477_page_2.htm


A turnkey deployment of Ushahidi Swift
http://blog.makerlab.com/2009/10/a-turnkey-deployment-of-ushahidi-swift/ Fri, 02 Oct 2009 22:58:43 +0000

Swift is intended to be used in rapidly evolving crisis situations such as a tsunami or a civic disaster. Over the last year, as Swift has been developed, there have been a number of events that fit the profile – the Mumbai terrorist incident is a good example. Each crisis has helped us improve the engine; hopefully it will soon be able to provide real value in crisis situations. In this post we review how to build an instance of Swift.

Deploying Swift quickly and easily, however, is a hassle. Chris Blow decided that we should build an EC2 instance that anybody could clone. The hope is that this will speed crisis intervention and provide better information to first responders in the field more quickly.

This post is technical; we were just focused on how to build a clean version of Swift. Here’s the set of incantations that we went through to do this. After this there will be a packaging and cloning step, and of course running Swift agents to keep watching crowd-sourced traffic. Swift itself is continuing to evolve, of course.

Here’s the incantation if you want to do this from scratch:

# Spawn an EC2 instance
# We created the instance based on the Jaunty Jackalope AMI by Eric Hammond

# Fix the EC2 instance to allow remote ssh logins so that your friends can help you

sudo vi /etc/ssh/sshd_config
PasswordAuthentication yes

# restart sshd so the change takes effect ( 1080 happened to be sshd's pid on our instance )
sudo kill -HUP 1080

# Add your friends to sudoers so they can actually do useful stuff
sudo vi /etc/sudoers

so it looks like this or thereabouts:

# User privilege specification
root    ALL=(ALL) ALL
anselm  ALL=(ALL) ALL
chris   ALL=(ALL) ALL

# Go ahead and make sure that apt is updated and the like
# https://help.ubuntu.com/8.04/serverguide/C/apt-get.html
# We’re going to battle our way through installing rails
# We want a TON of core ubuntu packages that are totally unrelated to rails as well
# http://www.hackido.com/2009/04/install-ruby-rails-on-ubuntu-904-jaunty.html
# mysql will ask for a password – i just set it to the word ‘password’

apt-get update
apt-get dist-upgrade
apt-get install build-essential
apt-get install ruby ri rdoc mysql-server libmysql-ruby ruby1.8-dev irb1.8 libdbd-mysql-perl libdbi-perl libmysql-ruby1.8 libmysqlclient15off libnet-daemon-perl libplrpc-perl libreadline-ruby1.8 libruby1.8 mysql-client-5.0 mysql-common mysql-server-5.0 rdoc1.8 ri1.8 ruby1.8 irb libopenssl-ruby libopenssl-ruby1.8 libhtml-template-perl mysql-server-core-5.0

apt-get install subversion git git-core
apt-get install apache2 apache2-prefork-dev libapr1-dev libaprutil1-dev
apt-get install libxml2 libxml2-dev libxslt1.1 libxslt1-dev

# install ruby gems…

wget http://rubyforge.org/frs/download.php/60718/rubygems-1.3.5.tgz
tar zxvf rubygems-1.3.5.tgz
cd rubygems-1.3.5
ruby setup.rb

ln -s /usr/bin/gem1.8 /usr/bin/gem
ln -s /usr/bin/ruby1.8 /usr/bin/ruby
ln -s /usr/bin/rdoc1.8 /usr/bin/rdoc
ln -s /usr/bin/ri1.8 /usr/bin/ri
ln -s /usr/bin/irb1.8 /usr/bin/irb
gem install rails --no-rdoc --no-ri

gem install passenger
passenger-install-apache2-module

# let’s not bother installing the passenger-install-nginx-module
# on ubuntu, apache2 and nginx and the like seem to like to put the websites in /var/www
# let’s fetch the swift app and put it there

cd /var/www
git clone git://github.com/unthinkingly/swiftriver.git

# and let’s make the db…  it is mysql for now … later hopefully postgresql or builtin

cd swiftriver
cp config/database.yml.sample config/database.yml

mysql -p # when i installed mysql i set the password to ‘password’ so use that when it asks
mysql> create database swift;
mysql> flush privileges;
mysql> exit;

# now the database is up – this is needed for the rake step below
# now let’s configure the app by letting it pull in a million billion gems and the like

gem sources -a http://gems.github.com
gem install mongrel fastthread json GeoRuby haml mislav-will_paginate daemons
gem install ruby-debug
gem install gchart
rake gems:install

vi config/environment.rb  # correct the dependency versions ( e.g. for haml ) to match what is installed on the system!

# let’s try to migrate the actual database in!!!

rake db:migrate

# lets run tests

rake test

# Finally you can even RUN IT! LOOK AT IT! HAVE AWE!

script/server start

apt-get install lynx
lynx http://localhost:3008

# I guess you can add Phusion Passenger too
# http://www.modrails.com/documentation/Users%20guide%20Apache.html

vi /etc/apache2/sites-available/default

I added this:

Listen *:80
NameVirtualHost *:80
LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-2.2.5/ext/apache2/mod_passenger.so
PassengerRuby /usr/bin/ruby
PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-2.2.5
PassengerMaxPoolSize 10
<VirtualHost *:80>
ServerName swift.org
DocumentRoot /var/www/swiftriver/public
RailsBaseURI /rails
RailsEnv development
</VirtualHost>

# notice the magic above about being in development mode in rails for passenger …
# to restart this under apache you can restart apache or magically kick passenger to restart itself:

/etc/init.d/apache2 stop
/etc/init.d/apache2 start
cd /var/www/swiftriver
mkdir tmp
touch tmp/restart.txt

# also there is some detail with mysql performance that may be worth looking at see:
# http://www.hackido.com/2009/04/install-ruby-rails-on-ubuntu-904-jaunty.html

Programming, Graph Theory and a Request For Help
http://blog.makerlab.com/2009/08/programming-graph-theory-and-a-request-for-help/ Fri, 07 Aug 2009 19:51:06 +0000

Comments on a work in progress

I’m working at the Banff Centre right now – taking existing work that I have already done and generalizing it to build a bare-bones locative application server and client. The idea is to let people use a mobile device to post photographs or text or audio or movies up to a shared web server and then see those posts on a shared map. This is similar to work I have already done; in fact it is just a slight repurposing of existing work from a variety of open source resources. If it goes well then it will be a resource locative media artists can use to share their projects.

However, today I seem to have run into a problem – the kind of problem that I see over and over. I thought I would elaborate on it here – and hopefully get some answers.

What I’m stuck on is finding an easy way to let the user interact with the data without having to re-invent the wheel. Of course the whole dataset can be incredibly complex – and can end up being something that is difficult to interact with – so many people have worked at finding ways to visualize data, such as, for example, SkyRails.

What data am I trying to represent?

I’ll back up a bit and just talk about programming first – a kind of mini-tutorial. In computer programming there are at least three ways to represent data. Let’s consider an ordinary web mapping application purely as an example here, and what data it might contain:

1) In a user view or UX view the user is presented with an easy to understand visual representation of the data. For example a user might see a “map” object that contains “marker” objects. And the user can interact with this visual representation – to add or remove markers for example.

2) A data-only view may represent the “maps” and “markers” as just rows in database tables. This is a very dry and mechanical view that does not make it clear that the objects have relationships to each other ( such as that the map may own the markers or that the markers are near each other ).

3) A graph based view. Often in my head when I am working with an application I tend to represent it as a graph of nodes floating in space. These nodes have connections to each other and it is those connections that are of primary importance. In my case I would represent a “map” as a parent node, and then draw the “markers” as children nodes.
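The same map-and-markers data can be written down in each of these representations. Here is a minimal sketch – the field names and node labels are invented for illustration:

```python
# 2) The data-only view: flat rows, with relationships only implicit
#    in foreign keys like map_id.
maps_table = [{"id": 1, "title": "Banff"}]
markers_table = [
    {"id": 10, "map_id": 1, "label": "hostel"},
    {"id": 11, "map_id": 1, "label": "trailhead"},
]

# 3) The graph view: the map is a parent node, the markers are children,
#    and the edges themselves are the primary thing.
graph = {("map", 1): [("marker", 10), ("marker", 11)]}

def children_of(node):
    """Follow the parent->child edges out of a node."""
    return graph.get(node, [])

# 1) The user/UX view would simply render the graph: a map widget
#    containing one marker widget per child node.
```

Notice how the graph form makes the ownership relationship explicit, where the table form leaves it buried in a foreign-key column.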

In fact I feel the third view is the most accurate. What computer programming really “is” is just a form of graph manipulation. Once you can put aside the visual representation and really see what is going on, what you see is that it largely consists of many, many subtle and carefully defined relationships between things. This is why many programming tutorials are so confusing: they focus on syntax and grammar rather than on this intuitive understanding.

When we’re looking at a graph-based view, what becomes clear is that it is the relationships between things, not the things themselves, that are most important. This maxim might be true of everything.

What kinds of relationships exist between data?

So what I really want to look at is the kinds of relationships that exist between objects, and then look at ways to represent them. There are at least three kinds of relationships that a programmer is often trying to define:

1) The relationships of links between “Instances”, such as the relationship between you and your facebook friends. These can be referred to as “Instance Relationships”. An “object” in this sense is something that exists as an instance – not just that it is a “Person” or a “Place” but that it is a specific person, a specific place – an instance of a kind.

2) The “structural” relationships of properties, attributes or qualities. For example in Facebook a Person and a Group are both kinds of object with similar properties. Often it makes sense to say that they are both a kind of “friendable” object, so that you can write one piece of software that looks at friendable objects and shows which are friends with which – instead of having to write similar but slightly separate code for friends of persons and friends of groups. In a kind of Platonic model there is an idea of an abstract “kind”, and other kinds are variations of that kind. These relationships between “kinds”, not “instances”, are crucial to understand.

3) The event messaging relationships, such as in a video game where a collision event may be sent from one object to another so that the other object knows it should do something appropriate ( such as rebound or make an appropriate sound ).
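These three kinds of relationships can be sketched concretely. Here is a minimal Ruby illustration – all class, module and method names are hypothetical, invented only to make the distinction visible:

```ruby
# Structural relationship: Map and Marker are both "kinds" sharing the
# Nameable quality, the way Person and Group might both be "friendable".
module Nameable
  attr_accessor :name
end

class Marker
  include Nameable
  def initialize(name)
    @name = name
  end
end

class Map
  include Nameable

  def initialize(name)
    @name = name
    @markers = []    # instance relationships: this map owns these markers
    @listeners = []  # event relationships: blocks to notify on change
  end

  attr_reader :markers

  def add_marker(marker)
    @markers << marker
    # event messaging: tell each listener about the change
    @listeners.each { |listener| listener.call(:added, marker) }
  end

  def on_change(&block)
    @listeners << block
  end
end

map = Map.new("Portland")
map.on_change { |event, marker| puts "#{event}: #{marker.name}" }
map.add_marker(Marker.new("Chapman School"))
```

Here the Nameable module carries the “structural” relationship between kinds, the @markers array holds the “instance” relationships, and the listener blocks carry the event messaging.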

The challenge I face here is the following:

1) I would like to let users be able to make maps and markers (and other instance objects)… this is easy obviously.

2) I would like to be able to let users say that the maps/markers they make are “private” so that nobody else can see them, or “protected” so that only friends can see them, or “public” so that anybody can see them…. this is also not hard.

3) I would like to allow the user to delete maps or markers that they made or that are marked public for deletion – or that they have administrative privileges over…. obviously a requirement.

4) I would like to let the users add or remove markers from maps if those maps are open, or if those users have administrative permissions over those maps…. and obviously a requirement as well.

5) Where it starts to get challenging is that I would like to let the user decorate a map or marker with arbitrary attributes. For example I would like to be able to let you set the “barometric pressure” on a marker if you wished.

6) In a perfect world I would like the user to be able to say that a given marker is a “prototype” kind of marker and that other kinds of markers can be instanced with the special attributes inherited from the prototype. I’m uncertain if it makes sense to allow an entire prototype hierarchy. This would let me have a map where all markers added to the map automatically had a “barometric pressure” attribute that could then be filled in.

7) In an even more perfect world I would like to be able to say that certain attributes are mandatory, and that others are optional.
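Requirements 5 through 7 above amount to prototype-style attribute inheritance. As a rough sketch of how that might be modeled – this is an assumption about the design, not working code from the project, and the Marker class here is invented for illustration:

```ruby
# Hypothetical sketch of requirements 5-7: markers carry arbitrary
# attributes, and a marker may name another marker as its prototype,
# inheriting (and optionally requiring) the prototype's attributes.
class Marker
  attr_reader :attributes, :prototype

  def initialize(prototype = nil)
    @prototype = prototype
    @attributes = {}
    @mandatory = []
  end

  def set(key, value)
    @attributes[key] = value
  end

  # walk the prototype chain when an attribute is not set locally
  def get(key)
    return @attributes[key] if @attributes.key?(key)
    @prototype ? @prototype.get(key) : nil
  end

  def require_attribute(key)
    @mandatory << key
  end

  # valid when every mandatory attribute (its own, or its immediate
  # prototype's) has a non-nil value somewhere on the chain
  def valid?
    mandatory = @mandatory.dup
    mandatory += @prototype.instance_variable_get(:@mandatory) if @prototype
    mandatory.all? { |key| !get(key).nil? }
  end
end

weather_proto = Marker.new
weather_proto.set("barometric pressure", nil)
weather_proto.require_attribute("barometric pressure")

station = Marker.new(weather_proto)
station.valid?                        # false until the pressure is filled in
station.set("barometric pressure", 1013)
station.valid?                        # true
```

The sketch only checks one level of mandatory attributes; a full prototype hierarchy would walk the whole chain, which is exactly the design question raised in point 6.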

Are there any off the shelf solutions?

Now, the challenge is really that this kind of issue occurs over and over. We’ve seen systems like RDF and the Freebase project as attempts to address this. What I really want is a combination of two things:

1) A database model that can bind to Rails that represents this kind of data.

2) And a visual editing tool that lets an ordinary user easily drag and drop objects together, to create and break relationships between objects, and to edit the attributes of objects.  Something I can simplify or tailor the view of.

On the second point – why don’t tools like this exist? If most, if not all, programming problems are a variation of this, then one would imagine that there would be many graph editing tools out there that let users wander a garden full of data, pruning and trimming it as they go. Even lesser versions of such a tool would seem useful. I would like something that could be put into a browser that provided a succinct view of the tree of objects and allowed easy manipulation of that tree directly. Tools like activescaffold for Rails do provide a complete database view, but I want a view that reflects the relationships between objects in a tree – emphasizing the parents and children and emphasizing easy editing of attributes.

Any tips?

Olson code timezones geekery
http://blog.makerlab.com/2009/05/olson-code-timezones-geekery/
Wed, 20 May 2009 20:21:01 +0000

Geek warning: This post is a bit of a total geek out but in any case here’s a bit of code that some of you might find useful. For the rest of youzes here is a pretty picture:
mapserv31

Yesterday I ended up writing a small piece of code to determine what timezone code a person is in.  This was not as trivial as it sounds.

In fact it was a learning experience about how complex our organization of time is – living on a sphere and all. Timezone maps are not country maps; they are a complicated politics, cleaving apart regions that hold some minimum number of people – like some kind of crazy Voronoi diagram of clusters of human populations. Country boundaries play a strong role, but timezones are more like a mold that has grown over the existing history of a landscape. Alberta has a time zone that juts into Saskatchewan just to capture one town. Argentina slices timezones horizontally towards the South Pole to conserve sunlight for farmers. Chile doesn’t give a damn and puts all of Chile in one gigantic vertical strip of a timezone – so that in the winter it is dark at 6:00 in Santiago but sunny at 6:00 in Tierra del Fuego. In other places of the world, small islands show up more clearly than they do in country maps, because each island tends to be a well-defined separation of human populations and thus a good opportunity for a single time zone code. A lost history is traced here in palimpsests of residual boundaries.

help the voronoi are invading

If you look at the zoneinfo article on Wikipedia you can get a sense of what is going on here : http://en.wikipedia.org/wiki/Zoneinfo

The backstory is that the OpenBSD folks wanted help to automate configuration of timezones for installs. What we did was take the geolocation of their IP and use it to look up a timezone code.

There are 27000+ polygonal boundaries that make up the timezones. And there are some 370+ timezones, including all the various special cases.

The goals were:

  • We wanted an extremely fast computation.
  • We wanted it to be static ( not require a database ).
  • We wanted accuracy.
  • We wanted it to return an Olson POSIX string like “Europe/Paris”.

I did fix one bug, which was that in asking MapServer to spit out .png files it was palettizing the colors into an 8-bit-deep image – whereas I needed 9 bits… so I had to ask MapServer to print out a TIFF instead. If you see any other bugs, let me know.

Here are the source files in any case:

You can build it by typing

gcc timezones.c

And you can test it running with a longitude, latitude pair – for example:

./a.out -114 53

Which should return to you a string showing the time zone you are in.

The way it works is that I read in a timezone shape file from this place

http://koordinates.com/layer/751-world-time-zones/

This was piped to the following program:

http://civicmaps.org/maps/layers.rb

Which instructed my MapServer to generate a special kind of choropleth map – which can be seen here ( but don’t bother because it chews my machine ):

http://civicmaps_dontbother.org/cgi-bin/mapserv?map=/www/sites/civicmaps.org/maps/x.map&service=WMS&WMTVER=1.0.0&REQUEST=map&SRS=EPSG:4326&LAYERS=lowboundaries0&FORMAT=image/png&STYLES=&WIDTH=2048&HEIGHT=1024&BBOX=-180,-90,180,90

This data file can now be used as a bitmapped query interface for discovery of unique time zone codes as done above – or as done in ruby here :

http://civicmaps.org/maps/longlat.rb

You need ImageMagick installed.

If you wish, you can convert the image into a ppm file or something and embed it directly in the C program. Or memory map it and have a query gateway to it… that would be fastest.
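The whole trick, then, is that the rendered image becomes a lookup table: project (longitude, latitude) onto a pixel, read the pixel, and map its value back to an Olson code. Here is a toy pure-Ruby sketch of that arithmetic – the raster, the zone index and the zone table are all made up for illustration; the real data comes from the MapServer rendering described above:

```ruby
# Toy sketch of the bitmap query: each cell of an equirectangular raster
# holds a timezone index; sampling the cell under (lon, lat) recovers it.
WIDTH  = 8   # the real raster was 2048 wide
HEIGHT = 4   # ... and 1024 tall

raster = Array.new(HEIGHT) { Array.new(WIDTH, 0) }
raster[1][2] = 147                  # pretend zone 147 covers this cell

ZONES = { 147 => "Example/Zone" }   # index -> Olson string (illustrative)

def lonlat_to_pixel(lon, lat, width, height)
  # lon -180..180 maps to x 0..width-1; lat 90..-90 maps to y 0..height-1
  x = ((lon + 180.0) / 360.0 * width).floor.clamp(0, width - 1)
  y = ((90.0 - lat) / 180.0 * height).floor.clamp(0, height - 1)
  [x, y]
end

x, y = lonlat_to_pixel(-67.5, 22.5, WIDTH, HEIGHT)
zone_index = raster[y][x]
puts ZONES.fetch(zone_index, "unknown")  # prints "Example/Zone"
```

Packed into a memory-mapped file, this lookup is a couple of multiplies and one read – which is why the approach can be both static and extremely fast.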

Reproducing Chuck Close isoluminant paintings
http://blog.makerlab.com/2009/04/reproducing-chuck-close-isoluminant-paintings/
Mon, 20 Apr 2009 22:47:51 +0000

The work of Chuck Close demonstrates an almost algorithmic perception of the visual image. One wonders if he actually sees the world in this manner. There’s a quality to this work that creates a tension with human perception: we see the overall sense of an image, yet the individual features of that image are in a way unrelated.

chuck close closeup

What does it mean for a viewer to apprehend this image? At what level does a viewer understand or appreciate the emphasis on process? Does the viewer appreciate the aesthetics apart from the labor? How can a viewer appreciate the image more deeply?

For me, I thought that if I tried to re-create the visual feel of a Chuck Close then I’d develop some understanding of his experience. I wrote a series of quick tests using the Processing library. After many revisions ( which you can see on Flickr ) I ended up with this result:

isoluminant paige saez chuck close

Although crude, I felt this started to represent some of the understanding of his work. My understanding of his accomplishment grew. I also came across a great paper that looked at this in more detail – talking about the technical qualities of Chuck Close’s work and doing a more accurate reproduction:

http://www.cs.princeton.edu/gfx/proj/isolum/

One of my earlier revs shows the amount of process I went through – first using random scatter plots, then trying to stay more on a grid, trying to introduce noise, and trying to work closer and closer to many of the attributes that Chuck Close exhibited. I found that it was difficult to make the computer generated image sufficiently dirty or noisy to approximate the feel of a real canvas.

paige isoluminant older test

My own source code was a more modest attempt and as usual here it is:

PImage a;

int w = 454;
int h = 480;

void setup() {
  size(w,h);
  background(0,0,200);
  colorMode(HSB);
  a = loadImage("/Users/anselm/p3.jpg");
  noStroke();
}

int x = 10;
int y = 10;
int sizex = 12;
int sizey = 12;
int count = w*h;

int[] boffsets = { 0, 100, 10, 0 };

void draw() {
  x = int(random(0,w/10)) * 10;
  y = int(random(0,h/10)) * 10;

  count = count - 1; if ( count < 1 ) { count = w*h; }
  color c = a.get(x,y);
  // note: this local h shadows the global image height h declared above
  float h = hue(c) + random(10) - 5;
  float s = saturation(c) + random(10);
  float b = brightness(c) + random(10) - 5 + 20;

  sizex = 12;
  sizey = 12;

  // paint four overlapping daubs: jitter the position, shrink the size
  // and nudge the brightness a little on each pass
  for ( int i = 0; i < boffsets.length; i = i + 1 ) {
    x = x + int(random(0,3)-1);
    y = y + int(random(0,3)-1);
    fill(color(h,s,b+boffsets[i],200));
    sizex = int(sizex - random(0,3));
    sizey = int(sizey - random(0,3));
    ellipse(x, y, sizex, sizey);
  }
}
@kissmehere and there! and there! and there! Kissing Booths FTW!
http://blog.makerlab.com/2009/04/kissmehere_silly_a_new_twitterbot/
Mon, 13 Apr 2009 22:19:32 +0000

Last night at makerlab we were talking about dating sites a bit – commenting on how strange it was that they didn’t leverage social networks. For fun today I threw a fun idea together as a test of how to make dating more social. It isn’t terribly serious but perhaps amusing. I like to combine talk with praxis. We had some fun thinking about the different ways that we could imagine designing a simple tool for twitter. We decided we wanted it to be really really simple.

Like REALLY REALLY simple. Like this:

twitter.com/kissmehere

Kinda like a kiss. Yeah, just like that. Like a kissing booth! JUST like a Kissing Booth!

kisses!

The way this all works is that when you send a message to @kissmehere on twitter and you include the name of some people, it will send a kiss to all those people. For example:

@kissmehere go kiss @paigesaez @zephoria @soycamo @semaphoria @anselm

Kissmehere is no prude: you can kiss more than one person at the same time – or kiss only one person – it is up to you.

Kissmehere maps your kisses too!

MakerLab Kiss Map!

@kissmehere

How did I build it? This is just a riff on the same twitter code I have been using before. There are a few twists. First we have some preamble and a geocoding engine – thanks MetaCarta!!!:


require 'rubygems'
require 'dm-core'
require 'twitter'
require 'net/smtp'
require 'net/http'
require 'uri'
require 'json'
require 'dm-core'

#
# passwords
#
TWITTER_USER_NAME = "kissmehere"
TWITTER_PASSWORD = ""
METACARTA_USERID = ""
METACARTA_PASSWORD = ""
METACARTA_KEY = ""

#
# a very very nice metacarta utility to brute force discover location in text
#
def geolocate(location)
  location = URI.escape(location, Regexp.new("[^#{URI::PATTERN::UNRESERVED}]"))
  # location = URI.escape(location)
  host = "ondemand.metacarta.com"
  path = "/webservices/GeoTagger/JSON/basic?version=1.0.0"
  path = "#{path}&doc=#{location}"
  data = {}
  begin
    req = Net::HTTP::Get.new(path)
    req.basic_auth METACARTA_USERID, METACARTA_PASSWORD
    http = Net::HTTP.start(host)
    response = http.request(req)
    puts response.body
    data = JSON.parse(response.body)
  rescue Timeout::Error
    # DO SOMETHING WISER
    return 0,0
  rescue
    return 0,0
  end
  begin
    lat = data["Locations"][0]["Centroid"]["Latitude"]
    lon = data["Locations"][0]["Centroid"]["Longitude"]
    return lat,lon
  rescue
  end
  return 0,0
end

We have a simple data model as usual to track our activity… again using DataMapper, which is a favorite of mine.


#
# Only send out 10 tweets at a time
#
twittercap = 10

#
# Grab a database
#
DataMapper.setup(:default, {
    :adapter  => 'postgres',
    :database => "kissmehere",
    :username => '',
    :password => '',
    :host     => 'localhost'
})

#
# here is our schema
#
class Kiss
  include DataMapper::Resource
  property :id,          Integer, :serial => true
  property :provenance,  Text
  property :uuid,        Text
  property :title,       Text
  property :link,        Text
  property :description, Text
  property :screenname,  Text
  property :userid,      Text
  property :location,    Text
  property :lat,         Float
  property :lon,         Float
  property :secret,      Integer, :default => 0
  property :friended,    Integer, :default => 0
  property :kissed_at,   DateTime
  property :created_at,  DateTime
end


We have the usual twitter gem code to peek at the twitter state. I am really starting to wonder how the heck twitter even stays up with the amount of traffic it is getting… In any case mine is not to worry but to do!


#
# Remember kiss requests
#
twitter = Twitter::Base.new(TWITTER_USER_NAME, TWITTER_PASSWORD )
twitter.replies().each do |twit|
  uuid = "#{twit.id}"
  kiss = Kiss.first(:provenance => "twitter", :uuid => uuid)
  next if kiss
  secret = 0
  secret = 1 if twit.text[/ secret/] != nil
  lat = 0
  lon = 0
  if twit.user.location && twit.user.location.length > 1
    lat,lon = geolocate(twit.user.location)
  end
  kiss = Kiss.create(
             :provenance => "twitter",
             :uuid => uuid,
             :title => twit.text,
             :link => nil,
             :description => nil,
             :screenname => twit.user.screen_name,
             :userid => twit.user.id,
             :location => twit.user.location,
             :lon => lon,
             :lat => lat,
             :secret => secret
          )
  kiss.save
  puts "Saved a kiss on twitter! #{kiss.userid} #{kiss.title} #{kiss.lat} #{kiss.lon}"
end



Next we want to respond to kisses in an intelligent way; telling everybody, friending new friends and all that kind of fun stuff.


#
# Pass new kisses onwards ( only do twittercaps worth )
#
@kisses = Kiss.all(:order => [:created_at.desc],
                   :limit => twittercap,
                   :kissed_at => nil
              ).each do |kiss|

  # tease each kiss apart for multiple receivers
  kisses = kiss.title.scan(/\@\w+/)
  kisses.each do |luckyduck|
    next if luckyduck == "@kissmehere"
    if kiss.secret == 0
      kiss.link = "http://twitter.com/#{kiss.screenname}/statuses/#{kiss.uuid}"
      gossip = "#{luckyduck} got a kiss from @#{kiss.screenname} - see #{kiss.link} "
      # if kiss.lat != 0 && kiss.lon != 0
      #  gossip = " - #{gossip} near #{kiss.location}"
      # end
    else
      kiss.link = nil
      gossip = "#{luckyduck} got a kiss from an anonymous admirer!" # luckyduck already contains the @
    end
    kiss.description = gossip
    result = twitter.post(gossip)
    puts "Told everybody #{result} of #{gossip}"
  end
  if kisses.length == 0
    puts "No love from #{kiss.screenname}"
  end
  kiss.kissed_at = DateTime.now
  kiss.save

  # friend everybody - could improve this
  begin
    twitter.create_friendship(kiss.screenname)
  rescue
  end
  kisses.each do |luckyduck|
    begin
      #if twitter.friendship_exists(TWITTER_USER_NAME,luckyduck)
      twitter.create_friendship(luckyduck)
    rescue
    end
  end

end

Finally we write out an RSS feed for Google Maps – thanks @ajturner for the quick tip. I wasn’t able to get the Ruby RSS maker to do anything useful – such as allow me to specify custom namespaces for the geo:lat and geo:long attributes – so I wrote everything by hand! By doing this we can then make a map page which has all the kisses on it, just for fun. I won’t show this blob here because it breaks the layout engine in WordPress; I will link to the original file however at agent.txt
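Writing such a feed by hand is mostly string assembly. Here is a hypothetical minimal sketch – the sample kiss and field values are invented – declaring the geo namespace ourselves since the stock RSS maker would not:

```ruby
# Hypothetical sketch of writing a GeoRSS feed by hand, with the W3C
# geo: namespace declared on the root element.
require 'cgi'

kisses = [
  { :description => "@anselm got a kiss from @paigesaez",
    :link => "http://twitter.com/paigesaez/statuses/1",
    :lat => 45.52, :lon => -122.68 }
]

feed = []
feed << '<?xml version="1.0" encoding="UTF-8"?>'
feed << '<rss version="2.0" xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#">'
feed << '<channel><title>kissmehere</title><link>http://twitter.com/kissmehere</link>'
feed << '<description>kisses, mapped</description>'
kisses.each do |k|
  feed << "<item>"
  feed << "  <title>#{CGI.escapeHTML(k[:description])}</title>"
  feed << "  <link>#{k[:link]}</link>"
  feed << "  <geo:lat>#{k[:lat]}</geo:lat>"    # the custom namespace elements
  feed << "  <geo:long>#{k[:lon]}</geo:long>"  # that rss/maker refused to emit
  feed << "</item>"
end
feed << "</channel></rss>"
puts feed.join("\n")
```

Escaping the title through CGI.escapeHTML matters here, since tweet text can contain characters that would otherwise break the XML.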

That’s it. Have fun out there in the twitterverse!

Iraq Deaths TwitterBot
http://blog.makerlab.com/2009/04/iraq-deaths-twitterbot/
Thu, 02 Apr 2009 18:03:17 +0000

We’ve posted an update to our Iraq Deaths agent at http://twitter.com/iraqdeaths . Here I’m going to journal and document the work involved in making this actually work.

At a high level this agent makes a daily post to twitter to broadcast any deaths reported by the Iraq Body Count database. I should mention that without their efforts at keeping and surfacing these records, none of this would be possible. This kind of work is itself emotionally challenging and I want to applaud them.

Design-wise this was a project that Paige and I thought up and were excited by. We did it in a morning while we were both supposed to be doing other work – but the fact is it resonated with work we both care about.  Speaking for myself at Meedan we’ve been looking for ways to help social networks bridge the barriers of language and I saw this as a way to contribute to a project that helped people keep attention on something that is normally invisible.  For me this was a rewarding project because I enjoy removing the technical boundary between design and implementation. Often I like to work with designers to put real muscle underneath their vision.

The technical implementation consists of three stages

  1. collection
  2. analysis
  3. publishing

In the collection stage we talk to the Iraq Body Count database – pulling the entire corpus of deaths and then plucking them out of their CSV-structured text – like so:

require 'rubygems'
require 'net/http'
require 'uri'
require 'fastercsv'

url = "http://www.iraqbodycount.org/database/download/ibc-individuals"
data = Net::HTTP.get_response(URI.parse(url)).body
puts "fetched"
@results = FasterCSV.parse(data)
@deaths = []
inside_header = true
@results.each do |death|
  # skip the preamble rows until we see the column-header row
  if death[0] == "IBC code"
    inside_header = false
  elsif inside_header == false
    @deaths << death
  end
end



After this stage we go and add any new data to our own database. In this way we keep a running track of any changes and can act only on changes rather than on all the data. I first attempted to use sqlite3 but then ended up using DataMapper – like so:

require 'rubygems'
require 'dm-core'
DataMapper.setup(:default, {
   :adapter  => 'postgres',
   :database => "endiraqwar",
   :username => 'endiraqwar',
   :host     => 'localhost'
})
class Death
 include DataMapper::Resource
 property :id,         Integer, :serial => true
 property :code,       String
 property :name,       Text
 property :age,        Text
 property :sex,        Text
 property :marital,    Text
 property :parental,   String
 property :earliest,   DateTime
 property :latest,     DateTime
 property :location,   Text
 property :created_at, DateTime
 property :posted,     DateTime, :default => nil
end
# DataMapper.auto_migrate!
@deaths.each do |death|
 if Death.first(:code => death[0] )
   puts "We already found this death #{death[1]} #{death[0]} so not saving"
   next
 end
 # take a second to convert the date phrase into a machine date
 death[6] = DateTime.parse(death[6])
 death[7] = DateTime.parse(death[7])
 record = Death.new(
             :code => death[0],
             :name => death[1],
             :age => death[2],
             :sex => death[3],
             :marital => death[4],
             :parental => death[5],
             :earliest => death[6],
             :latest => death[7],
             :location => death[8]
          )
 puts "recording the passing of #{record.name} at #{record.earliest} and #{record.code}"
 record.save
end



The last phase is to report the actual deaths. We rely on the twitter gem to do this - I find I am using this gem more and more and it is quite convenient - like so:

twittercap = 50 # twitter this many posts max
require 'twitter'
twitter = Twitter::Base.new("iraqdeaths",secret_password)
@deaths = Death.all(:order => [:earliest.desc], :limit => twittercap)
@copyofdeaths = []
@deaths.each do |death|
 @copyofdeaths << death
end
@copyofdeaths.reverse.each do |death|
 # publish deaths that are new
 next if death.posted != nil
 result = twitter.post("#{death.name}, #{death.age}, #{death.sex}, #{death.marital}, #{death.parental} killed on #{death.earliest.strftime("%d %b %Y")} at L:#{death.location}")
 # remember that we already published this death
 death.posted = DateTime.now
 death.save
 puts "posted the death of #{death.name} #{death.code}"
end



I try to be very careful to never update my own understanding of the database record until I am absolutely certain that Twitter has been updated. Even if my agent crashes I want it to crash in a way that doesn't spray garbage all over twitterspace.

Overall, as you can see, the high aspiration of building something that makes a statement is connected to real metal underneath. We're grateful to the http://www.iraqbodycount.org organization for making this possible.

Biomimetic Signaling in Twitter
http://blog.makerlab.com/2009/02/biomimetic-signaling-in-twitter/
Fri, 27 Feb 2009 03:48:18 +0000

http://twitter.com/anselm

http://twitter.com/meedan

http://twitter.com/makerlab

In Vonnegut’s futuristic dystopia, the Handicapper General uses a variety of handicapping mechanisms to reduce inequalities in performance. A spectator at a ballet comments: “it was easy to see that she was the strongest and most graceful of all dancers, for her handicap bags were as big as those worn by two hundred pound men.”  [ http://en.wikipedia.org/wiki/Signalling_theory ]

Chapter 1 : The beasts move as one

Anselm: Have you noticed how it is that the beasts in nature can in some circumstances appear to move as one?

Socrates: Indeed. One need look no further than to witness the swifts descend upon the Chapman chimney tower in Portland.

Anselm: Yes, the swifts are a good example. They follow each other and swarm in patterns that clearly indicate an awareness of each other. There is some kind of signaling going on between them which coordinates their actions and makes them appear as if they are all components of a larger organism. As well, in Portland many of the computer geeks themselves exhibit a similar swarming behavior using technology. I don’t think there is anything new under the sun. I propose that by observing how animals communicate in general, we can draw parallels to better understand the deeper roles that technology is manifesting.

Socrates: Certainly as it is in nature so is it in human behavior.  We are a part of nature and nature is a part of us and we cannot stand outside of that ‘wheel of life’ as the Buddhists like to say. Our cognitive wings let us fly at a greater elevation above the landscape but nature is grander than we can comprehend even so.

Anselm: Furthermore I’d like to focus primarily on Twitter.  Twitter itself is a very simple technology, letting people share brief messages in a public way about where they are and what they are doing. It crosses showing off with people watching.

Theodorus: Did you just say Twitter?! Nobody uses Twitter for anything serious. It is at best a distraction, at worst a plague.

Anselm: True, Twitter may yet prove to be the digital age’s version of the sitcom. Nevertheless animal signaling theory has recently become popular among anthropologists as a way to study human communication [*]. And in that light I see Twitter as a form of biomimetic signaling. It is starting to emulate the kind of subtle, gestural and soft signaling patterns that we see in nature. If the metaphor holds true, then by looking at nature we can gain insight into where our digital savannas are taking us.  [ * http://ssi.sagepub.com/cgi/content/abstract/44/4/603 ]

Socrates: There may be a kernel of truth here. It is supported by others such as Rheingold who have already noted flocking behavior among the digerati.

Anselm: There’s an emerging degree of swarming, coordination, just in time planning, that makes a cohesive group appear to exist out of a series of autonomous individuals. I’ve personally witnessed people get rides from airports at midnight after the transit stopped, people collectively swarm to try to track down a stolen bicycle, venue changes for meetings where nobody misses a beat, random get-togethers facilitated by a real time awareness. There is a kind of real time responsiveness not present with services such as email, the telephone, the classifieds or even newspapers and television mass media.

Theodorus: But if you look at how people tried to use Twitter for something serious – the Mumbai crisis [*] being an example – it had almost no impact. [ * http://www.informationweek.com/blog/main/archives/2008/11/twitter_in_cont.html ].

Anselm: Yes but where were television and other media at this time? They were not even reporting the event till hours later. Twitter is closer to a natural signaling pathway between peers. A near real-time operator-in-the-loop opportunity exists. There’s a possibility of not just consuming the big events but participating in them and understanding them. And that possibility is new.

Signaling

Theodorus: You’ve been using the term ‘signals’. What kind of signals do you mean to speak of?

Anselm: By signals I mean not just, say, voice, or text, or the shriek of a falcon for that matter. Signaling includes any and all marks or signifiers embedded in the plainly visible world. For mice and men the world is decorated with placed hints that aid in navigating through it. Droppings, trails, marks on trees, the presence or absence of others, their haste or sloth. These are all aspects of a voice that can be used to communicate polyphonically across many media.

Socrates: Are all signaling mediums equal or are there divisions that we can apply?

Anselm: The choice of which medium to use depends on the goals, but I propose a few core criteria, namely 1) secrecy, 2) fidelity, 3) volume, 4) persistence. Creatures great and small often want a semi-public boundary of privacy where their messages are visible to specific peers. They may want to signal one message to predators or symbiotic species, and a different message to other members of their own kind. As well, the choice of medium depends on the environment. Clearly, for example, the vibrancy or color of a rich coat of feathers is not visible at night – so to signal vigor perhaps requires an audio-based medium. There may be a desire to communicate clearly – more than simply “I am here” – and the fidelity of the medium, its ability to clearly carry the message, may affect the choice. Volume itself – who can hear the message – may affect the medium. And finally some mediums afford a longer term persistence.

Socrates: What kind of mediums then do you see?

Anselm: I would roughly classify signaling mediums into 1) transient and 2) durable.

Socrates: Of the transient flavor then are there any different kinds of mediums? For certainly worms communicate differently than birds do they not?

Anselm: I would suggest that we define transient media just using the five basic senses of sight, sound, touch, smell and taste. A bird will flash the tip of its wing to signal that it is turning left or right. Carrion birds will circle over the site of a likely meal. Dinoflagellates will bioluminesce in ocean tides (for reasons not entirely understood). Plants practice interspecies signaling using appearance, odor and taste. Grooming and nit-picking instincts are employed by chimpanzees to broker peace.

Bioluminescent dinoflagellates surround a swimmer. Alpine berries signal readiness by color and taste.

Socrates: Then of durable signaling – are there any divisions that can be made? For is writing the same as leaving a message that a friend later repeats from memory? Bees can carry information about where nectar is by a ritualized spatial physical grammar – or in other words, dancing. Human gossip networks spread awareness and information (accurately or inaccurately) quite effectively as well. How would you classify such distinctions?

Anselm: Of the durable flavor I would say there are not divisions but simply variations of a similar theme.  Any durable markings left persistently upon some medium. I admit it is worthy of note that marks, signals, scratches and the like can be imprinted on intelligent agents themselves but I don’t see that distinction as critical.

Theodorus: To be practical, where do you see evidence of signaling then in human communities?

Anselm: Well, individual appearance and clothing style choices are strong indicators of social alignment. Using Portland as an example again, personal tattoos are deeper signaling commitments. Individuals use private email for discretion, but we also see the use of public blogging and indeed Twitter. Graffiti and advertising fit into this as well, as does standing on the corner with a bullhorn or dancing in a parade. Of course we know this from Marshall McLuhan and others: how the mediums have a plasticity that changes in response to their loads, their equivalence and indeed their commonplace ubiquity. It is as if we are all constantly “Helen Kellering” our way through the world, adapting to new ways of being informed on an ongoing basis.

Heterogony

Socrates: Do you mean to indicate that all people are equal then?

Anselm: Yes. Let us start with the proposition that we’re all equals.  We are all people who attempt to communicate with each other.

Theodorus: Equal? I don’t see how I am equal with, say, a large advertising firm such as Wieden+Kennedy. They have more money, more time, and more attention devoted to making their message heard.

Socrates: True. Communication arts are practiced daily by Weiden-Kennedy and by many others – even individual artists who exhibit at events such as First Thursday or Last Thursday or any of the numerous gallery art openings. These people put more resources into their messages than do most of the rest of us.

Anselm: True. Some people do put a lot of effort into their messages. I suppose we call this advertising at a certain point. But advertising is more often miss than hit – nobody really understands the human mind yet.

Theodorus: Still one can scarcely say that these actors are equivalent. If some can devote more reasoning to their message then time itself becomes a barrier that makes us different.

Anselm: True. But it feels like we are just leaving the age of industrial broadcast media. I agree that one of the key characteristics of the previous empire was the ease with which our attention and our energy could be diverted for private gain. Advertisers successfully steered the flocking behavior of large segments of the population for worse. I am not certain that it will hold true much longer however.

Socrates: This is deeper than simply individuals. It is deeply woven into the legal and accepted definition of western culture itself. Our concepts of the ownership of space, and of the appropriate use of space, are largely controlled by private interests. We accept that it is legal to place large billboards that capture our attention. We accept that it is illegal for an individual voice to “vandalize” equivalent space with graffiti. In fact almost all physical urban space is private. There is no real place to rest one’s eyes or one’s body. The illusion is provided only as long as one doesn’t attempt to stand still.

Anselm: I must concur. I’ve often felt that if beauty is in the eye of the beholder then at least some portion of its value should belong to the beholder. Yet we see a tenacious externalized ownership of social objects even though these become part of the consumer’s cognition. It creates a space of false signals that reflect wholly accepted yet corrupted pseudo-truths. Baudrillard makes the point in Simulacra and Simulation that in fact we no longer even know where truth is. This is not even a question of truth being relative and contextual, but of it simply being arbitrary and unconnected to anything.

Theodorus: If you really believed this then you would actually do something about it.

Anselm: Certainly, but this implies a value judgment of deeply ‘right’ or deeply ‘wrong’. I’m not entirely certain industrial media is ‘wrong’ in the sense that it should be ‘stopped’. Yes it is ugly, and true, all value is aesthetic, but the advertiser’s message is something we are now becoming inoculated against. In surmounting that obstacle we ‘the people’ have evolved. True, many beautiful and transitory communication art forms that could exist do not. This is life. I do feel that attention economics is like real economics – unpredictable. Attention is that rarest of beasts, fooled once but learning quickly and inoculated quickly; in almost a Jungian manner.

Socrates: Not only are well funded interests able to make their own messages most loud but they’re also able to listen to us much better than we can listen to each other. One need look no further than ventures such as http://www.scoutlabs.com/ and http://newmediastrategies.net/ to see how social signaling or sentiment tracking is a core part of day to day business analytics. Projects such as http://microplaza.com also deserve consideration. Even consider projects such as http://www.twitalyzer.com/twitalyzer/profile.asp?u=anselm&p=27 on the personal side.

Anselm: True. It’s clear that there is an inequity between participants and that there is some value, dollar or otherwise, in capturing attention. Let us acknowledge then that there are different kinds of participants ranging from small to large. In some senses the larger participants are predators, or at least symbionts, benefiting from the actions of the smaller participants. And perhaps they should be distinguished as such.

Growing Pains

Socrates: In what ways does Twitter permit signaling?

Anselm: I’ve proposed that Twitter is a form of biomimetic signaling, in that it emulates patterns in nature, but in fact to be more clear it is more akin to a nervous system that is incomplete. And in this light I would like to defend it not for what it is but for what it could be.

Theodorus: Why defend Twitter? It is just one more walled garden – another ‘latest internet craze’ as the BBC put it. There are many of these.  Do you mean to defend them all?

Anselm: I feel Twitter is one exemplar. And I feel that there is a backlash against technology in total even though technology is simply surfacing epiphenomena that exist anyway. For example human communities engage in a certain volume of channel maintenance; where any random traffic is sent across the channel simply to keep it open. This annoying bubbling of “social trivia” is often lampooned but it is critical to making sure the channel is there when it is really needed. Noise is a deliberate artifact of human behavior in general.  Knowing that somebody peed means knowing that they are alive and can hear you.

Theodorus: Nevertheless, why Twitter? Why not Facebook, or other services, foremost?

Socrates: Indeed I agree with Theodorus. Twitter is just one signaling mechanism. For example recently a bay area tsunami warning alert system became more visible after it failed for better or worse [ * http://bit.ly/QttaQ ].

Anselm: Twitter is best known to my community so it serves best. And Twitter messages are closer to the atoms of social networks than Facebook messages. They are minimum sized “social objects”. Even the connections are one-to-one without any group concepts at this level. And messages themselves are just text, there are almost no special “powers” associated with a message such as you might find on other systems.

Socrates: Can you elaborate on this?

Anselm: In Twitter human agents have to type “rt” by hand to forward a message through the network – there is no special button called “rt” and there is no button called “thumbs up”. This makes Twitter simpler. The weight of user needs shifts into the grammar rather than into special features built into the framework. In the grammar one issues a message to, say, @anselm, or issues a “leave” request. Like Rael Dornfest’s IWantSandy [ * http://iwantsandy.com ] project, the burden is shifted into a more natural human dialogue. The same sense of a single input box is also visible in the new Firefox Ubiquity project [ * http://connect.educause.edu/blog/rmcdonal/ubiquityforfirefoxprettya/47246?time=1235691044 ]. This gives the environment greater composability at some computational cost. In fact for this reason Twitter is a kind of universal solvent: it sits underneath other services and is dissolving away those that are too baroque.
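This user-grown grammar is simple enough to sketch in a few lines. The following is a minimal illustration, not Twitter's actual parser; it recognizes the hand-typed "rt" convention, @mentions and hashtags described above:

```python
import re

def parse_tweet(text):
    """Parse the user-grown grammar of a tweet: the hand-typed "rt"
    convention, @mentions and #hashtags. A minimal sketch only --
    the real conventions were informal and varied between users."""
    retweet = bool(re.match(r'(?i)^rt\b', text.strip()))
    mentions = re.findall(r'@(\w+)', text)
    hashtags = re.findall(r'#(\w+)', text)
    return {"retweet": retweet, "mentions": mentions, "hashtags": hashtags}

print(parse_tweet("RT @anselm: signaling in the open #biomimicry"))
```

The point of the sketch is that all of the "features" live in the text itself; the framework never needs to know what a retweet is.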

Theodorus: Nevertheless Twitter itself has so many deficiencies. It doesn’t have group concepts – an idea that was already common in IRC and ICB a decade ago. It is hard to filter noise. It is a silo. It is crash-prone. It is hard to hear whole conversations. It is hard to have history. There is no way to subscribe to a geography. More deeply, it can be demoralizing – it is in many ways an ego game. And we don’t even know how the marketers are going to exploit this medium yet – or the spammers. It seems like the only thing it can coordinate is a pillow fight – the minute it is used for anything dangerous the status quo will turn it off. The biggest problem is that it is so full of noise that in order to consume it you need third-party analytical tools.

Anselm: Granted. In these cases Twitter is the exemplar and it stands for the whole. Its deficiencies are similar to the deficiencies of other services, and these separate pools will likely merge into a single view. My concern is that, like all new things, it has many weaknesses, but they distract from where the future is leading. The future leads precisely toward better analytical tools – but ones that are more evenly distributed.

Curation

Theodorus: Can you provide more detail on where you believe the future is leading?

Anselm: Biologists use the phrases “honest signals” and “dishonest signals” to distinguish between kinds of animal signaling. I feel that an idea of human curation could help improve the presence of “honest signals”. For example Thomson’s gazelles engage in stotting when being chased by predators. They leap in vertical bounds that actually make them easier to catch – but at the same time the vigor of their jumps may indicate to predators that they will not be easy to take down [ * http://www.springerlink.com/content/w58k120n74033231/ ]. Stotting is hard to fake and therefore is an honest signal. If you could select for the human equivalent of the ‘stotting’ channel then you’d have access to a true read on a situation.

Theodorus: What is the benefit of more honesty?

Anselm: As critiques of Twitter go, we can probably all acknowledge that the noise-to-signal ratio is indeed unbearable. Honesty in this sense is one part of a noise reduction strategy, because honest signals are hard to fake, therefore expensive, and therefore less common.

Theodorus: Well a private network would be even better – there would be no noise at all.

Anselm: Yes admittedly true. But being open points to an additional quality. Proponents of privacy argue that sharing information is a liability because predators or destructive forces in general can exploit the very communication pathways to find and take advantage of individuals. The term radio-silence derives from this fact. But an open communication network can be faster than predation on individuals. The network can signal very quickly, and the network itself can be improved by critical analysis. The word open is crucial because it has to be easy to access. Let us say that predators bring more computational power and analytics to the table. The flocks of lesser beasts bring a distributed network of sensors and computation to literally out-compute the situations in real time.

Anselm: As another parallel to nature consider frog cacophony. Frog cacophony in nature is designed to bewilder predators while allowing frogs to signal each other. The timing of these signals is crucial and in fact when airplanes fly overhead they disrupt the cacophony and allow predators to more easily pick out the location of frogs. Here the frogs are operating in the open but predatorial forces cannot easily exploit the signals. What is signal to the frogs is noise to the predator.

Theodorus: These signals seem awfully arbitrary.

Anselm: True, they are ritualized over time and usually side-effects of more pragmatic behavior. Narrowing the eyes and flattening the ears is a practical defense to protect the eyes and ears but it is now also a signal. Some of these signals are loud, visible to everybody, others are ‘conspiratorial whispers’ where the signaler and the receiver try to conceal the signals from third parties. There’s a concept that we see in stotting called Zahavi’s handicap principle – that in order to be honest a signal must be costly to the signaler. Peacock tails being an example. And of course we see mimicry of any signal; poisonous frog coloration being a good example of a predator defense.  [ * http://www.sparknotes.com/biology/animalbehavior/signalingandcommunication/section1.html ]

Theodorus: What are predators in human systems?

Anselm: The term predator is perhaps loaded since it implies ‘bad’. I tend to think more of exploitive forces using energy for their own benefit while improving the fitness of the system. For example in nature mosquitoes can smell carbon dioxide and use it to find sources of blood. This is predatory behavior that doesn’t necessarily kill the host, but it’s an example of the loads that the host carries. In human networks this could be anything from a brand such as Nike trying to create visibility for some arbitrary new style of shoe, to a banking institution trying to determine if they can get away with unusually high account fees. There are also large natural forces such as environmental change due to global warming, and the attendant pressures on food supplies and a resultant rise in father-knows-best style autocratic decision making.

Theodorus: Well, Twitter and similar systems don’t really deliver on these ideas. They are noisy and don’t do a particularly good job of making important facts available.

Anselm: There is a telos at work here. The future is drawing us towards bigger networks. The pressure is environmental, political and social; we need to “become bigger” because our social networks are larger, we are more mobile, and indeed the problems we see are larger and swifter. Bigger means noisier. But this isn’t automatically an argument for smarter algorithms to search or cull data. People could indicate which media is worthy simply by the attention they pay to it. In human communities we can gaze in a direction and other people will follow our eyes. That signaling behavior is unconscious but effective at steering attention.
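The followed-gaze idea can be made concrete: rank messages purely by how much attention they have already drawn, with no content analysis at all. A minimal sketch, where the attention log and its field names are assumptions for illustration:

```python
from collections import Counter

# Hypothetical attention log: each entry is (reader, message_id), meaning
# that reader dwelt on that message -- the online analogue of a followed gaze.
attention_log = [
    ("ada", "m1"), ("ben", "m1"), ("cem", "m1"),
    ("ada", "m2"),
]

def rank_by_attention(log):
    """Score messages by how many gazes they have drawn, highest first.
    No algorithmic judgment of content -- attention itself is the signal."""
    counts = Counter(msg for _, msg in log)
    return [msg for msg, _ in counts.most_common()]

print(rank_by_attention(attention_log))  # m1 drew more gazes than m2
```

The design choice matches the argument: curation emerges from aggregated human behavior rather than from a smarter culling algorithm.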

Theodorus: How can this work on the Internet?

Anselm: We need an ability to formally “curate” which signals are worth listening to. I chose the word “curate” to imply a human mediated approach rather than a technology mediated one. There are millions of twitter channels, and hundreds of thousands of ideas, links, posts, articles pushed through the network every day. But this is no more challenging than the real world which has an equal density of objects. Many Twitterers are just noise, but some are specialists, there are outlets for utilitarian information and different kinds of communities define utility in different ways. Curation simply recognizes that there’s a matchmaking, ranking, scoring, categorizing and brokerage role that some participants in Twitter can perform for other participants.  A value chain of different kinds of participants can then emerge.

Theodorus: When you say formal curation what do you mean?

Anselm: The curatorial role is one of exploring the raw data and marking and categorizing worthy material. A curator needs to be able to filter the data by six criteria: 1) subject 2) location 3) time 4) trust network 5) novelty and perhaps also 6) language. The actual interface, such as that offered by http://search.twitter.com, has to allow cleaving along these different criteria. And then it has to be possible to clump and aggregate results into buckets, and to up-score and down-score content.
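The cleaving operation described here is essentially a chain of filters. Below is a minimal sketch covering four of the six criteria (subject, location, time, trust network); the message records and their field names are invented for illustration, and novelty and language filters would follow the same shape:

```python
from datetime import datetime

# Hypothetical message records; field names are assumptions for illustration.
messages = [
    {"text": "geowanking meetup tonight", "topic": "cartography",
     "place": "Portland", "when": datetime(2009, 2, 20), "author": "anselm"},
    {"text": "new shoe drop", "topic": "fashion",
     "place": "New York", "when": datetime(2009, 2, 19), "author": "brand_bot"},
]

trusted = {"anselm", "caseorganic"}

def curate(msgs, topic=None, place=None, since=None, trust=None):
    """Cleave a stream along subject, location, time and trust network.
    Each criterion is optional; unsupplied criteria pass everything through."""
    out = msgs
    if topic:
        out = [m for m in out if m["topic"] == topic]
    if place:
        out = [m for m in out if m["place"] == place]
    if since:
        out = [m for m in out if m["when"] >= since]
    if trust:
        out = [m for m in out if m["author"] in trust]
    return out

picks = curate(messages, place="Portland", trust=trusted)
print([m["text"] for m in picks])
```

Clumping into buckets and up- or down-scoring would then operate on the filtered output rather than on the raw stream.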

Theodorus: So is this all about better search? Better search would make Twitter better?

Anselm: No, the search role is a curatorial and editorial role. Consider newspapers. A newspaper such as the New York Times has a staff of editors who in a sense curate what the readership is going to read. The readers flock to the newspaper based on their values, but they don’t pick the articles themselves. What’s missing in our system is a way to search the space from an expert’s vantage point in order to find the content that people will want to read.

Theodorus: But people have different criteria all the time. They move, their interests shift, they have new interests.

Anselm: Indeed. And subscription itself should be dynamic. You should be able to listen to a specific geography – and have that geography follow you around as you move [ * http://twitter.com/caseorganic ] . The curatorial role is not restricted to the curator – it is just aimed primarily at people who want to put the energy in – and intended to benefit everybody. If you cannot set boundaries or filters then you can end up with something that is more intended to be serendipitous art than pragmatically functional such as http://twittervision.com [ * David Troy ].
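A geography that follows you around is just a distance filter re-evaluated against your current position. Here is a minimal sketch; the message structure and its `where` field are assumptions for illustration:

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearby(messages, here, radius_km=25):
    """A moving geographic subscription: keep only messages tagged within
    radius_km of the current position; re-run as `here` changes."""
    return [m for m in messages if haversine_km(m["where"], here) <= radius_km]

stream = [{"text": "bridge closed", "where": (45.52, -122.68)},
          {"text": "tsunami drill", "where": (37.77, -122.42)}]
portland = (45.52, -122.67)
print([m["text"] for m in nearby(stream, portland)])
```

Calling `nearby` again with a new `here` is all it takes for the subscription to travel with the listener.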

Socrates: Such as it is in nature; birds can selectively listen to the channels they understand and ignore the irrelevant?

Anselm: Yes. The key draw is that Twitter and similar systems could allow us as a whole to react instantly and simultaneously to the signals of other agents, if we could just find the emitters that are relevant. This isn’t just crisis response but everyday opportunity. A well-culled set of emitters can provide awareness about a specific topic extremely quickly to a wide community – including persons who are not explicitly listening to that emitter as a first-order relationship at all times.

Examples of Curation

Theodorus: So how does one build such filters?

Anselm: Today there’s no way for the community to dynamically and collectively build the filters that we want. Twitter lacks group concepts so there is no natural way for clusters to emerge in an authoritative way yet. And searching in Twitter is just beginning to improve. But we can look at examples of primitive attempts to manually build such filters and we can use this as an example of how such curation might exist.

Theodorus: Ok, then, what are some examples of curated sets?

Anselm: There are collections such as those Britta Gustafson mentions at http://twitterpacks.pbwiki.com/ , http://delicious4teachers.pbwiki.com/ and the like. Of note here is one kind of manually built set of particular interest – location: http://twitterpacks.pbwiki.com/Twitter%2BPack%2Bby%2BGeographic%2BLocation . As well, Google readily returns a few more mainstream collections which I will repeat here for discussion:

Mainstream Green Twitterers [ * http://www.huffingtonpost.com/2008/08/19/best-green-twitter-feeds_n_119694.html ]

Mainstream News Twitterers  [ * http://my-creativeteam.com/blog/?p=694 ]

Mainstream Tech Twitterers [ http://blogs.zdnet.com/BTL/?p=12041 ]

(An older list of) Portland Tech Twitterers [ * http://siliconflorist.com/2008/03/19/portlands-top-30-tech-twitter-ers-1-may-surprise-you/ ]

Mainstream Comedy Twitterers

Theodorus: That may seem like a lot of sources but the reality is that it doesn’t even begin to reflect the diversity of values and interests that people have. Consider bicyclists, Baconists, Wiki fanatics, data visualization folks, artists, musicians, foodies, furries, parents. And in almost all of these cases there is a strong desire to filter geographically as well.

Anselm: True. Hand curation in this way is inefficient. That’s the whole point. There needs to be a way to do this using the power of the community.

Socrates: Twitter themselves have started to offer a suggestion service for people to listen to. What is needed beyond this? [* http://latimesblogs.latimes.com/technology/2009/02/twitter-suggest.html ]

Anselm: It is too hard to subscribe to individuals. I believe there must be ways to subscribe to classes of signals en masse. In mediums where group concepts are supported this is much easier. The email mailing list ‘geowanking’ is a good example of an authoritative single subscription point that gives you the best overview of the entire social cartography scene. In a strong sense I see this as a parallel to better vision. If we can see the data better then we can choose which data we are most interested in.

The future

Theodorus: Let’s pretend that you had a team of people who would scour Twitter for you and return exactly the set of tweets and, say, Twitter people that match your current interests – what real benefit or difference would that make in your everyday life?

Anselm: Well obviously as you state – a real signaling system will tell people what they need to know just in time. It would therefore most likely tend to reflect real local concerns and local geography. This is not entirely dissimilar from social cartography projects [ * http://www.nytimes.com/2009/02/17/science/17map.html ] but with a much higher emphasis on a data-driven approach. It is also not dissimilar from services such as Craigslist, except for this aspect of being real-time. It’s just that, framed around real time, curation and mobile access, it would be a qualitatively different experience.

Theodorus: Why don’t such solutions exist yet?

Anselm: http://ushahidi.org is a good example of the non-trivial challenges. Ushahidi is an emergency response solution for crisis situations such as floods, earthquakes or conflict. The same problems that Twitter is encountering are evident in Ushahidi. Chris Blow and Kaushal Jhalla of Ushahidi have in fact started looking at ways to build a filtering system around this same data collection problem [ * http://blog.ushahidi.com/index.php/2009/02/04/crisis-info-crowdsourcing-the-filter/ ]. Chris talks about the difference between “database barf” and human-curated collections that are sensitive to subtle human concerns.

Socrates: Well, this has been an interesting discussion.

Anselm: Overall my hope here is to simply draw attention to numerous signaling parallels between human and animal populations. I hope that by thinking of digital media not as some kind of new space, but as a variation on existing spaces, that we can dispel some of the new age kind of response to new media and simply recognize it as just another part of our world.
