Sensor “Visceralization”

  • Published on 18 November 2017
  • Joseph Paradiso

Urban responses to environmental issues are split between advocates of a return to nature and those who promote the technological solutions of the smart city, based on sensors and data. Joseph Paradiso, Director of the Responsive Environments Group at MIT, studies the interactions between individuals and computing technology. He explains how portable electronic sensors known as wearables give access to a wealth of data that modifies our experience of space and profoundly impacts the built environment. Electronic interfaces autonomously determine our needs, permitting the optimization of comfort and energy consumption. He sees a world of information taking hold in the physical world, connecting wearables in real time with the broader digital infrastructure. Carrying this virtual bubble along with us will even transform the very notion of the individual. In his view, the roles of the virtual and the real also seem destined to change, accompanying a “visceralization” of sensors and the digital that promises to expand our sensory capacities.

What is the Responsive Environments Group and what topics are you looking at?

At the Responsive Environments Group of the MIT Media Lab we look at how people connect to the nervous system of sensors that covers the planet. I think one of the real challenges for anybody associated with human interaction and computer science is really figuring out how people are transformed by this. The internet of things is something I’ve been working on for at least 15–20 years, ever since we referred to it as “ubiquitous computing.” Now we’re looking at what happens to people after we connect to sensor information, in a precognitive and visceral way, not as a heads-up display with text or some simple information. How does that transform the individual? What’s the boundary of the individual, where do “I” stop?

We already see the beginnings of it now. People are all socially connected through electronic media, and they’re connected to information very intimately, but once that becomes up close and personal as part of a wearable, it reaches another level entirely. And what about when it eventually becomes implantable, which is as far as we can see right now in terms of user interface? Where does the cloud stop and the human begin, or the human stop and the cloud begin? How are you going to be connected to this, and how are you going to be augmented by it? These are fascinating questions.

What specific research are you working on, especially with regards to the built environment?

We’re doing projects that are definitely impacting the built environment and that are inspired by the changes technology brings to it. Beyond that, we’re also really interested in how people act in natural places in different ways. We did a project six or seven years ago controlling heating with comfort estimation, done by my then-student Mark Feldmeier. We built a wrist-worn wearable much like a smartwatch. It would monitor activity using very little power, so you could wear it for years before you had to change the battery. It also measured temperature and humidity every minute, and obtained location from the radio. Indoor location will be one of the next big sensors, so to speak, to roll out and transform our in-building experience: you’ll know within a few centimeters where people are indoors. That’s going to open up so much in terms of user interaction. In our project, we knew something about your local state because we were measuring these parameters right on the body. So, we essentially learned how to control heating, ventilation, and air conditioning (HVAC) based on the sensor readings as labeled by your comfort. You’re not controlling the HVAC as you would with a thermostat; you’re simply saying whether you’re comfortable or not.
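
To make the comfort-driven control loop concrete, here is a minimal sketch of the idea, assuming a simple logistic comfort model over illustrative wearable features (temperature, humidity, activity). The data, feature choices, and setpoint search are stand-ins, not the actual Feldmeier system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative training data: wearable readings at the moments a user
# pressed "comfortable" (1) or "uncomfortable" (0).
# Features: [air temperature (C), relative humidity (%), activity (0-1)]
X = np.array([
    [21.0, 45.0, 0.1],
    [22.5, 50.0, 0.2],
    [26.0, 60.0, 0.1],
    [19.0, 40.0, 0.0],
    [23.0, 55.0, 0.6],
    [27.5, 65.0, 0.3],
])
y = np.array([1, 1, 0, 0, 1, 0])  # comfort votes, the only user "control"

model = LogisticRegression().fit(X, y)

def choose_setpoint(humidity, activity):
    """Pick the setpoint the comfort model likes best, treating the
    candidate setpoint as the resulting air temperature (a simplification)."""
    candidates = np.arange(18.0, 28.5, 0.5)
    features = np.column_stack([
        candidates,
        np.full_like(candidates, humidity),
        np.full_like(candidates, activity),
    ])
    p_comfort = model.predict_proba(features)[:, 1]
    return candidates[np.argmax(p_comfort)]

print(choose_setpoint(humidity=50.0, activity=0.2))
```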

I think that’s basically what the future interface is going to be. We’re not going to tell building systems directly what we want; they’re going to infer our needs. At some point, we’re going to label whether we like something or not, and they’re going to infer from that and be able to bootstrap. This goes back to the pioneering work of Michael Mozer in the 1990s, when he had his house controlled by a neural net and the light switches were essentially just providing reinforcement. We can take that to a whole other level now.
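
In the spirit of that switch-as-reinforcement idea, a minimal sketch might look like the following, where the occupant never sets a value and only rewards or punishes what the building tried. The epsilon-greedy scheme and all constants are illustrative assumptions, not Mozer’s original neural-net controller.

```python
import random

# Candidate actions the building can try on its own.
setpoints = [19.0, 20.0, 21.0, 22.0, 23.0, 24.0]
value = {s: 0.0 for s in setpoints}   # learned preference per action
counts = {s: 0 for s in setpoints}
EPSILON = 0.1                          # exploration rate

def pick_setpoint():
    if random.random() < EPSILON:
        return random.choice(setpoints)       # occasionally explore
    return max(setpoints, key=value.get)      # otherwise exploit

def give_feedback(setpoint, liked):
    """Incremental-average update from a thumbs-up/down (liked: bool)."""
    reward = 1.0 if liked else -1.0
    counts[setpoint] += 1
    value[setpoint] += (reward - value[setpoint]) / counts[setpoint]

s = pick_setpoint()
give_feedback(s, liked=True)   # the only "interface" is this label
```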

Before the smart HVAC project, we did a lot of work on user interfaces, wireless sensing, and wearable sensing that wasn’t directly concerned with the built environment. More recently, we’ve been focusing on lighting. For us lighting is intriguing because we now have control over any small group of lights or any fixture in a modern building; you can even retrofit a building with Bluetooth-enabled lighting fixtures. But how do you interface with that? It’s not clear; it’s a bit of a Wild West right now. So, we started projects that label the light coming off the fixtures by modulation. If you modulate every fixture with a unique code, then you can see how much illumination comes from each fixture with a small, simple sensor. On our lighting controller, I can just dial in the lighting I want and it will optimally use only the illumination it needs from nearby fixtures. It could be a wearable on my wrist, or eyeglasses, that becomes my lighting control anywhere.
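
The coded-light idea can be sketched roughly as follows: each fixture flickers imperceptibly with its own code, a photosensor correlates its signal against each code to estimate per-fixture contributions, and a controller then solves for dim levels that hit a target illumination. The codes, gains, and minimum-norm solve below are illustrative assumptions, not the group’s actual modulation scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each fixture flickers imperceptibly with its own +/-1 code; a single
# photosensor sees the sum. Correlating the signal against each code
# recovers how much light each fixture contributes at that spot.
n_fixtures, code_len = 4, 256
codes = rng.choice([-1.0, 1.0], size=(n_fixtures, code_len))  # ~orthogonal

true_contrib = np.array([120.0, 40.0, 5.0, 60.0])  # lux per fixture at sensor
sensor = true_contrib @ (0.5 * (codes + 1))        # codes as on/off flicker
sensor += rng.normal(0.0, 0.5, code_len)           # sensor noise

# Demodulate: project the mean-removed signal onto each code.
est_contrib = 2.0 * (sensor - sensor.mean()) @ codes.T / code_len
print(np.round(est_contrib, 1))                    # close to true_contrib

# Hit a target illumination with the minimum-norm combination of dim
# levels, which naturally leans on the brightest nearby fixtures.
target_lux = 150.0
dim = np.clip(target_lux * est_contrib / (est_contrib @ est_contrib), 0.0, 1.0)
print("dim:", np.round(dim, 2), "achieved lux:", round(float(est_contrib @ dim), 1))
```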

Can these innovations solve the problem of energy consumption?

In our tests, the smart HVAC had a significant effect on energy consumption, and it optimizes comfort as well as energy. Our current lighting controllers run off context. Knowing more or less what I’m doing, it knows the kind of situation I’m in and adjusts the lighting to be optimal for that. We’ve basically projected complex lighting into control axes optimized for humans. Instead of working with sliders or presets, the system can automatically adjust and converge pretty quickly into the lighting you want. I have a student wearing a Google Glass and the room illuminates automatically according to what she is doing. The lighting will change if she moves around or if she is in a social situation versus a work situation. It detects this, and will smoothly change the lighting to be appropriate. Of course, we can also optimize for energy consumption as well as satisfying contextual suggestions.
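
One plausible reading of “projecting complex lighting into control axes” is a dimensionality reduction over saved scenes, sketched below with made-up presets; the actual controller’s context detection and mapping are surely richer.

```python
import numpy as np

# Saved scenes: one brightness value per fixture (values are made up).
presets = np.array([
    [0.9, 0.9, 0.8, 0.2, 0.1, 0.1],   # "work"
    [0.3, 0.3, 0.2, 0.8, 0.9, 0.8],   # "social"
    [0.1, 0.1, 0.1, 0.3, 0.3, 0.2],   # "movie"
    [0.6, 0.7, 0.6, 0.6, 0.5, 0.6],   # "neutral"
])
mean = presets.mean(axis=0)
# The principal directions along which the scenes vary become the
# human-facing control axes.
U, S, Vt = np.linalg.svd(presets - mean, full_matrices=False)
axes = Vt[:2]

def set_lights(a, b):
    """Map two abstract sliders back to per-fixture brightness."""
    return np.clip(mean + a * axes[0] + b * axes[1], 0.0, 1.0)

print(np.round(set_lights(0.5, -0.2), 2))
```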

And now, it’s not just lighting: we’re also working with projection. Before too long we will have surfaces covering entire walls that provide dynamic video imagery. We have large monitors now, of course, and eventually we’ll have smart wallpaper. How do you control that to bring the right atmosphere into the room? We look at it responding to the individual, because we can measure affective parameters as well: Are you stressed? Are you relaxed? Are you in flow? What is your internal state? We can start to estimate that and have the room respond accordingly. The precise way we respond is different for everybody and can change, so the system has to learn. But we discovered that it can learn sequences of images and lighting and bring you into a certain state that can be better suited to what you’re doing.
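
As a toy illustration of a room responding to internal state, the sketch below estimates arousal from two physiological proxies and steps the display toward a target state; the features, weights, and scene values are illustrative assumptions, not the group’s affect models.

```python
import numpy as np

def estimate_arousal(heart_rate, skin_conductance):
    """Crude arousal estimate in [0, 1] from two wearable proxies."""
    hr = (heart_rate - 60.0) / 40.0          # rough normalization
    sc = skin_conductance / 10.0
    return float(np.clip(0.6 * hr + 0.4 * sc, 0.0, 1.0))

SCENES = {                                    # arousal each scene induces
    "forest_dawn": 0.2, "soft_interior": 0.4,
    "city_timelapse": 0.7, "surf_break": 0.9,
}

def next_scene(current_arousal, target_arousal=0.35):
    # Step partway toward the target instead of jumping straight to it,
    # in the spirit of learned sequences that ease you into a state.
    desired = current_arousal + 0.5 * (target_arousal - current_arousal)
    return min(SCENES, key=lambda s: abs(SCENES[s] - desired))

a = estimate_arousal(heart_rate=92, skin_conductance=7.0)
print(a, next_scene(a))
```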

Can you tell us more about your work that involves nature and the living?

One of our projects relates to an outdoor location called Tidmarsh. It’s an old bog that used to grow cranberries. These farms all moved north because of economic change, climate change, and change in the plants themselves. Much of Tidmarsh’s 600 acres have been turned back to nature. Rather than build a shopping mall or a development, the owners really want to return it to what it originally was, a wetland. So, it’s been bulldozed, it’s been changed, and we’re interested in capturing this whole process. We built low-power wireless sensors to measure the parameters of the wetland as it is restored and scattered them over several locations to get fine-grained data, which we’re now manifesting in a virtual world. Just as we do with the building in DoppelLab, you can now virtually go to this outdoor place and float through it, seeing the sensor information come up through animations. We’ve got thirty microphones in part of the landscape, so as you’re moving you can hear the natural world. The sensors make music too: we’ve got three or four different compositions driven by the sensor data in real time. We could do a city that way too, in principle. So, this becomes a whole new art form, one that is just starting to mature and which intrigues us very much.
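
For a flavor of sensor-driven music, here is a minimal sonification sketch that maps a stream of readings to pitches and renders them as sine tones with Python’s standard library; the mapping is a stand-in for the real-time compositions at Tidmarsh, not their actual system.

```python
import math, struct, wave

# Illustrative sensor stream, e.g. water temperature in degrees C.
readings = [4.2, 4.9, 6.1, 8.0, 7.4, 5.5]
RATE, NOTE_SEC = 22050, 0.4

def temp_to_freq(t, lo=0.0, hi=15.0):
    # Map the sensor range onto two octaves above A3 (220 Hz).
    frac = min(max((t - lo) / (hi - lo), 0.0), 1.0)
    return 220.0 * 2 ** (2 * frac)

with wave.open("tidmarsh_sketch.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)          # 16-bit samples
    w.setframerate(RATE)
    for t in readings:
        f = temp_to_freq(t)
        for i in range(int(RATE * NOTE_SEC)):
            s = int(12000 * math.sin(2 * math.pi * f * i / RATE))
            w.writeframes(struct.pack("<h", s))
```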

Like a virtual city?

We call it, instead of sensor visualization or sensor virtuality, sensor “visceralization.” It’s all kind of precognitive, hence visceral. We’ve built a framework where any kind of sensor can be used by any application, which is so important for the internet of things. You can also run it with headsets like HoloLens, Rift, or Vive. We are very interested in the idea of sensory augmentation, and we’re going to start with audio. If you’re looking across a certain area and we detect that you’re concentrating on something there, we’ll feed up sounds from the microphones in that spot; the sensors in the vicinity will produce some sonification or music that gets blended in, and we’ll track your head, your location, and some idea of your sensory focus. In as natural a way as we can, we’ll see how much we can get away with in terms of expanded perception, what we call a “sensory prosthetic.”
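
The attention-driven audio blend might be sketched like this: weight each microphone’s stream by how well its direction aligns with the user’s gaze, so that sensory focus “pulls in” distant sounds. The positions, focus cone, and normalization are assumptions for illustration.

```python
import numpy as np

# Microphone positions on the landscape, in meters (made up).
mic_positions = np.array([[10.0, 0.0], [3.0, 8.0], [-6.0, 4.0]])

def mic_gains(user_pos, gaze_dir, sharpness=4.0):
    """Per-microphone mix gains from angular alignment with gaze."""
    gaze = gaze_dir / np.linalg.norm(gaze_dir)
    to_mics = mic_positions - user_pos
    to_mics /= np.linalg.norm(to_mics, axis=1, keepdims=True)
    align = np.clip(to_mics @ gaze, 0.0, 1.0)   # 1 = dead ahead
    gains = align ** sharpness                   # narrow the focus cone
    return gains / (gains.sum() + 1e-9)          # normalize the mix

print(np.round(mic_gains(np.array([0.0, 0.0]), np.array([1.0, 0.2])), 2))
```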
