The tech world and consumers at large have been buzzing over recent reports and leaks indicating that Google will, within the next year, release smartphone-esque glasses. These devices, often dubbed "Terminator" glasses after the cyborg technology portrayed in the 1980s classic film of the same name, will reportedly overlay the physical world with digital data, augmenting our practices of looking.

The technology differs little from existing smartphone apps that overlay images of the physical world with bits of information. The difference, and it is a big one, is the embodied integration of this particular device. Controlled by movements of the head and face, the Google glasses are designed to fit seamlessly into the movements of the organic body. To look with this device is to see an augmented world. The eye takes in not only the reflections of light that make up the image, but also the data that strategically and purposively contextualize it.

This increased integration of physical bodies and digital technologies similarly integrates physical spaces and digital information. No longer will brick and mortar be the counterpoint to web-based locations. Rather, brick and mortar will act as the base upon which digital data is written, and digital data will become part and parcel of the architecture of the physical space, an architecture that can be updated in real time.

Although Google reps do not, at this point, expect people to wear the glasses continuously, we can quite easily imagine a near future with full digital augmentation of organic seeing (i.e., Google-type glasses worn all or most of the time). What implications might this have for commerce, privacy, meaning-making, and mental processing?