[Getting back to blogging Ubicomp from my notes. Unfortunately ran out of time while the conference was going on]
Location sensing was the first session of the day, and it is one of the most developed areas in ubiquitous computing. The first talk was on using signals injected into the power wiring of a building to sense location. The idea is that you attach a transmitter to normal home power wiring to inject some signals (earlier work used two frequencies; this paper used wideband) that can be sensed throughout the house. You then map out the received values for those signals throughout the house, and later use that mapping to determine where a receiver is located. The second talk was on using CDMA signals (from carriers like Sprint in the US) to locate users based on signal delay. Using GSM and WiFi signals is apparently already well established, but all three approaches require mapping the signal strength/delay across the space to be used for localization.
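The survey-then-locate idea is essentially fingerprinting. Here's a toy sketch of my own (the room names, signal vectors, and nearest-neighbour matching are illustrative assumptions, not anything from the paper): record a vector of received signal values at known locations, then locate a receiver by finding the closest stored fingerprint.

```python
import math

# Survey phase (hypothetical data): room -> vector of received signal values
fingerprint_map = {
    "kitchen":  [0.82, 0.10, 0.45],
    "bedroom":  [0.20, 0.75, 0.30],
    "hallway":  [0.55, 0.40, 0.60],
}

def locate(reading):
    """Return the surveyed room whose fingerprint is nearest to this
    reading, by Euclidean distance."""
    def dist(fp):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(fp, reading)))
    return min(fingerprint_map, key=lambda room: dist(fingerprint_map[room]))

print(locate([0.80, 0.12, 0.44]))  # prints "kitchen"
```

The same lookup structure works whether the fingerprint values are power-line signal amplitudes, GSM/WiFi signal strengths, or CDMA delays; only the survey phase differs.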
The best paper award for the conference went to Pedestrian localisation for indoor environments. The goal of this system is to track pedestrians indoors using no infrastructure, unlike ultrasonic systems, which can be extremely accurate but require expensive transducers every couple of meters. The system uses a combination of pedestrian dead reckoning and robot localization techniques. A shoe-mounted inertial sensor can detect the user's steps, but accelerometers have drift that eventually creates too much error. That drift is compensated for using the robot localization technique. A model of the building's floor plan is created or derived from blueprints, and given the motion information from the inertial sensor, guesses can be made about where the user is located in the building. Since the user cannot walk through walls, as they walk further the system uses a particle filter to winnow down the set of possible locations. It's neat work, though it does require an accurate map of the indoor environment.
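To illustrate the winnowing idea, here is a minimal particle filter sketch of my own, not the paper's code, using a one-dimensional corridor with a single wall as a stand-in for a real floor plan: each particle is a candidate position that moves by the estimated step length plus noise, and any particle whose move would pass through the wall is eliminated, so the survivors converge on positions consistent with the walker's path.

```python
import random

WALL_X = 5.0  # a single wall at x = 5 in a 1-D corridor (illustrative)

def step(particles, step_length, noise=0.2):
    """Advance every particle by one detected step; kill wall-crossers."""
    survivors = []
    for x in particles:
        new_x = x + step_length + random.gauss(0, noise)
        # "cannot walk through walls": discard particles whose move
        # crosses the wall in either direction
        if not (x < WALL_X <= new_x or new_x <= WALL_X < x):
            survivors.append(new_x)
    if not survivors:          # degenerate case: keep the old set
        return particles
    # resample to keep the particle count constant
    return [random.choice(survivors) for _ in range(len(particles))]

# Start with particles spread over the whole corridor; after several
# steps forward, particles that would have crossed the wall have died off.
particles = [random.uniform(0, 10) for _ in range(1000)]
for _ in range(5):
    particles = step(particles, step_length=0.8)
```

The real system does this in 2-D against an actual floor-plan model, but the mechanism is the same: motion updates from the inertial sensor, plus a map constraint that prunes impossible hypotheses.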
Another interesting paper in a completely different area is Towards the automated social analysis of situated speech data. The authors use automated voice recorders that record everything the subjects say, but then throw away the raw audio data and keep only information about which subjects talk to each other, plus some variables about their speech (pitch, number of conversational turns, etc.). Using this data they can create an automated sociogram based on who spoke to whom.
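The sociogram itself is just a weighted graph over subjects. A small sketch of the idea, with made-up participant logs (this is my own illustration, not the authors' pipeline): count every pairwise co-occurrence in the detected conversations and emit a weighted edge list.

```python
from collections import Counter
from itertools import combinations

# Hypothetical log: each entry is the set of subjects detected in one
# conversation (no audio content is retained, only who talked with whom)
conversations = [
    {"alice", "bob"},
    {"alice", "bob", "carol"},
    {"bob", "carol"},
    {"alice", "bob"},
]

# Count pairwise interactions to get sociogram edge weights
edges = Counter()
for participants in conversations:
    for pair in combinations(sorted(participants), 2):
        edges[pair] += 1

for (a, b), weight in sorted(edges.items()):
    print(f"{a} -- {b}: {weight} conversations")
```

This prints `alice -- bob: 3`, `alice -- carol: 1`, `bob -- carol: 2`, which is the adjacency data a sociogram visualizes.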
In the security and privacy track there was a talk about the privacy problems associated with the fine-grained sensors used for activity sensing in the home. The authors point out that one can snoop on the wireless data transmissions, and even though they are encrypted, an eavesdropper can infer which sensors are being activated based on time of day and standard usage patterns.
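A toy illustration of that traffic-analysis side channel (entirely hypothetical data and names, not from the talk): the eavesdropper never decrypts anything, but matches the timing of an encrypted transmission against household usage patterns learned from observation.

```python
# Learned from passive observation (hypothetical): sensor -> hours of the
# day at which that sensor's encrypted transmissions typically appear
usage_patterns = {
    "coffee_maker":  {6, 7, 8},
    "bathroom_door": {7, 8, 22, 23},
    "front_door":    {8, 9, 17, 18},
}

def guess_sensor(hour):
    """Given only the hour an encrypted packet was seen, return the
    sensors whose usual pattern matches that time."""
    return sorted(s for s, hours in usage_patterns.items() if hour in hours)

print(guess_sensor(6))  # prints ['coffee_maker'] -- inferred from timing alone
```

The point is that encryption hides the payload but not the metadata; timing alone narrows an observation down to a small set of likely activities.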
The conference day ended with a Town Hall session. This was a discussion about the administrative aspects of an ongoing conference: location of next year’s conference, whether to be sponsored by ACM, straw polls on whether to go all digital for the proceedings, etc. It was quite cool to see the internal aspects of the conference discussed in an open and somewhat democratic forum.