A few thoughts on Adam Greenfield's survey of the now-and-future landscape of ubiquitous computing. It's concise and well-written, though not in-depth; I would add a few points to his assessment:

On Multiplicity: the more mundane aspects, such as multiple systems knowing which of them is being addressed, are certainly valid engineering problems. For the issue of multiple conflicting orders (or preferences) being simultaneously delivered to a single system, I also see cause for some playfulness. Who decides a room's temperature, lighting, or mood music? Depending on the participants and the forum, this could become a metagame of its own. The interaction between individuals' static preferences and the system's processing rules could be fun enough on its own: gamers might yield control to the player with a high score; businessmen to recent sales...the list goes on. But add the ability to incorporate performance, realtime feedback, and 'dialog' between systems, and the possibilities for a bit of fun are endless.
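To make the idea concrete, here's a minimal sketch of that kind of preference arbitration. Everything here is invented for illustration--the participant names, the scores, and the "highest score wins" rule are just one possible processing rule a room's system might adopt:

```python
# Hypothetical sketch: a shared system resolving conflicting preferences
# by yielding control to the participant with the highest 'metagame'
# standing (a game high score, recent sales figures, etc.).

def arbitrate(preferences, scores):
    """Return the setting preferred by whichever participant
    currently has the highest score."""
    winner = max(scores, key=scores.get)
    return preferences[winner]

# Preferred room temperature in degrees C, and current standings.
prefs = {"alice": 21, "bob": 24}
scores = {"alice": 1200, "bob": 900}

print(arbitrate(prefs, scores))  # alice leads, so the room goes to 21
```

Swap the scoring function and you get a different social game entirely--which is exactly the point: the arbitration rule itself becomes part of the play.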

On The Inescapability of One's Own Datatrail: Greenfield gives not even a passing mention to the ability to create multiple digital personas--just as most of us do now--which remain linked to each other only to the degree we explicitly allow. There's no reason why every ubiquitous system should recognize (and correlate) our behavior with every other system's recognition of us as an entity.
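One well-known way to build such unlinkable personas is to derive a distinct pseudonym per system from a single user-held secret, so two systems cannot correlate their records without the user's cooperation. A minimal sketch using an HMAC for the derivation--the secret and system names below are made up for illustration:

```python
import hashlib
import hmac

# Hypothetical sketch: per-system pseudonymous personas derived from one
# user-held secret. Each system sees a stable identifier, but identifiers
# from different systems look unrelated.

def persona(secret: bytes, system_id: str) -> str:
    """Stable pseudonym for one system; differs across systems."""
    digest = hmac.new(secret, system_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

secret = b"user-held secret key"
print(persona(secret, "thermostat.example"))   # one persona here...
print(persona(secret, "storefront.example"))   # ...an unrelated-looking one there
```

Only the user, holding the secret, can demonstrate that two personas belong to the same person--linkage happens exactly to the degree explicitly allowed.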

Two statements he makes that apply just as well to non-ubiquitous design:

"Everyday life presents designers of everyware with a particularly difficult case because so very much about it is tacit, unspoken, or defined with insufficient precision"

Social networking sites? PDAs? Maybe even personal finance software? The problem applies to these as well, to varyingly recognized degrees.

"How can we fully understand, let alone propose to regulate, a technology whose important consequences may only arise combinatorially as a result of its specific placement in the world?" (emphasis added)

Ditto the above examples, and pretty much everything else in the world.

He also comments: "We will find that everyware is subtly normative, even prescriptive--and, again, this will be something that is engineered into it at a deep level." While there can be value in saying that certain specific technologies (and their implementations) are more or less normative than others, I would argue that for sweeping statements like this, the more relevant truth is that humans are normative (and, to a hopefully slightly lesser degree, prescriptive) beings.

If ubicomp allows us to monitor our blood glucose levels in realtime, then many people will monitor theirs obsessively--even more so if they can compare it against friends, family, and coworkers. But this won't be because of any presumption built into the technology. The reason we don't do it right now is more likely that we can't, rather than that we wouldn't want to. This line of logic leads to an entire new world of considerations--the 'tyranny of choice' and so on. But--in an ideal world--once designers have ethically designed the information architecture, access mechanisms, and so forth to be morally unimpinging, their work is far from finished. Then begins the probably far more difficult job of weighing the implications of human nature, and redesigning with that in mind. But since this is not an ideal world, users will be exposed to poorly designed, ethically challenged implementations, and will have to deal, collectively and individually, with the results--just like we do today.