Joseph Paradiso

The "Visceralization" of Sensors

Nan Zhao, Halo. Photo: Aguilera Williams

Urban responses to environmental issues are split between advocates of a return to nature and those who promote the technological solutions of the smart city, based on sensors and data. Joseph Paradiso, Director of the Responsive Environments Group at MIT, studies the interactions between individuals and computing technology. He explains how portable electronic sensors known as wearables give access to a set of data that modifies our experience of space and profoundly impacts the built environment. Electronic interfaces autonomously determine our needs, permitting the optimization of comfort and energy consumption. He sees a world of information becoming established in the real world, articulating wearables in real time with the general digital infrastructure. Carrying this virtual bubble along with us will even alter the very notion of the individual. In his view, the roles of the virtual and the real also seem destined to change, accompanying a “visceralization” of sensors and the digital, a source of heightened sensory capacities.

What is the Responsive Environments Group and what topics are you looking at?

At the Responsive Environments Group of the MIT Media Lab we look at how people connect to the nervous system of sensors that covers the planet. I think one of the real challenges for anybody associated with human interaction and computer science is really figuring out how people are transformed by this. The internet of things is something I’ve been working on for at least 15–20 years, ever since we referred to “ubiquitous computing.” Now we’re looking at what happens to people after we connect to sensor information, in a precognitive and visceral way, not as a heads-up display with text or some simple information. How does that transform the individual? What’s the boundary of the individual, where do “I” stop?

We already see the beginnings of it now. People are all socially connected through electronic media, and they’re connected to information very intimately, but once that becomes up close and personal as part of a wearable, it reaches another level entirely. And what about when it eventually becomes implantable, which is as far as we can see right now in terms of user interface? Where does the cloud stop and the human begin, or the human stop and the cloud begin? How are you going to be connected to this, and how are you going to be augmented by it? These are fascinating questions.

What specific research are you working on, especially with regards to the built environment?

We’re doing projects that are definitely impacting the built environment and that are inspired by the changes in the built environment that technology provides. Beyond that, we’re also really interested in how people act in natural places in different ways. We did a project six or seven years ago controlling heating with comfort estimation, done by my then-student Mark Feldmeier. We built a wrist-worn wearable much like a smartwatch. It would monitor activity using very little power, so you could wear it for years before you had to change the battery. It also measured temperature and humidity every minute, and obtained location from the radio. Indoor location will be one of the next big sensors, so to speak, that’s going to roll out and transform our in-building experience. You’ll know within a few centimeters where people are indoors. That’s going to open up so much in terms of user interaction. In our project, we knew something about your local state because we were measuring these parameters right on the body. So, we essentially learned how to control heating, ventilation, and air conditioning (HVAC) based on the sensors as labeled by your comfort. You’re not controlling the HVAC like on a thermostat, you’re saying if you’re comfortable or not.
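The comfort-labeling loop described here can be sketched as a simple controller: instead of setting a temperature directly, the wearer only says whether they are comfortable, and the system nudges its setpoint accordingly. A minimal sketch, with all class and parameter names hypothetical rather than taken from the actual MIT system:

```python
# Hedged sketch of comfort-label-driven HVAC control.
# The wearer never sets a temperature; they only give comfort feedback.

class ComfortHVAC:
    """Adjusts a zone setpoint from user comfort labels rather than
    direct thermostat commands (hypothetical interface)."""

    def __init__(self, setpoint_c=22.0, step_c=0.5,
                 min_c=18.0, max_c=26.0):
        self.setpoint_c = setpoint_c  # current target temperature (Celsius)
        self.step_c = step_c          # how much one label nudges the target
        self.min_c = min_c            # clamp to an efficient band
        self.max_c = max_c

    def label(self, feedback):
        """feedback: 'too_warm', 'too_cold', or 'comfortable'."""
        if feedback == "too_warm":
            self.setpoint_c -= self.step_c
        elif feedback == "too_cold":
            self.setpoint_c += self.step_c
        # Clamp to a safe and energy-efficient band.
        self.setpoint_c = max(self.min_c, min(self.max_c, self.setpoint_c))
        return self.setpoint_c

hvac = ComfortHVAC()
hvac.label("too_warm")      # setpoint drops to 21.5
hvac.label("too_warm")      # setpoint drops to 21.0
hvac.label("comfortable")   # setpoint stays at 21.0
```

A real controller like Feldmeier’s would fold in the wearable’s temperature, humidity, activity, and location readings to generalize from sparse labels rather than reacting to each one.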

I think that’s basically what the future interface is going to be. We’re not going to tell building systems directly what we want; they’re going to infer our needs. At some point, we’re going to label whether we like something or not and they’re going to infer from that, and be able to bootstrap. This goes back to the pioneering work of Michael Mozer in the 1980s, when he had his house controlled by a neural net and switches were just doing reinforcement, essentially. We can take that to a whole other level now.

Before the smart HVAC project, we did a lot of user interface, wireless sensing, and wearable sensing, not concerned directly with the built environment. More recently, we’ve been focusing on lighting: for us lighting is intriguing because we now have control over any small group of lights or any fixture in a modern building. You can even retrofit a building with Bluetooth-enabled fixtures for lighting. But how do you interface to that? It’s not clear, it’s now a bit of a Wild West. So, we started projects that would label the light coming off the fixtures by modulation. If you modulate every fixture with a unique code, then you can see how much illumination comes from each fixture with a small, simple sensor. On our lighting controller, I can just dial my lighting as I want and it will be optimally using only the illumination it needs from proximate fixtures. It could be a wearable that I have on my wrist or eyeglasses that become my lighting control anywhere.
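The coded-fixture idea can be made concrete: if each fixture subtly modulates its output with a unique orthogonal code, a small photosensor can separate how much illumination each fixture contributes by correlating its samples against each code. A toy sketch assuming idealized ±1 Walsh-style codes and a noiseless sensor (all names are hypothetical, not the group’s actual protocol):

```python
# Sketch: per-fixture illumination estimation via coded modulation.
# Each fixture dims around its mean level following a unique ±1 code;
# the codes are mutually orthogonal and average to zero, so a sensor
# can recover each fixture's contribution by correlation.

CODES = [
    [+1, -1, +1, -1],
    [+1, +1, -1, -1],
    [+1, -1, -1, +1],
]
DEPTH = 0.1  # 10% modulation depth; in practice kept imperceptible

def sensor_samples(contributions):
    """Total light at the sensor for each of the 4 code time slots.
    contributions[i] is fixture i's mean illumination at the sensor."""
    samples = []
    for t in range(4):
        total = sum(c * (1 + DEPTH * CODES[i][t])
                    for i, c in enumerate(contributions))
        samples.append(total)
    return samples

def estimate(samples):
    """Correlate against each code to recover per-fixture contributions."""
    n = len(samples)
    est = []
    for code in CODES:
        corr = sum(s * c for s, c in zip(samples, code)) / n
        est.append(corr / DEPTH)
    return est
```

With the contributions known per fixture, a controller can then solve for the cheapest combination of proximate fixtures that reaches the illumination the wearer asked for.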

Can these innovations solve the problem of energy consumption?

In our tests, the smart HVAC had a significant effect on energy consumption, and it optimizes comfort as well as energy. Our current lighting controllers run off context. Knowing more or less what I’m doing, the system knows the kind of situation I’m in and adjusts the lighting to be optimal for it. We’ve basically projected complex lighting into control axes optimized for humans. Instead of working with sliders or presets, the system can automatically adjust and converge pretty quickly on the lighting you want. I have a student wearing a Google Glass, and the room illuminates automatically according to what she is doing. The lighting will change if she moves around or if she is in a social situation versus a work situation. It detects this and will smoothly change the lighting to be appropriate. Of course, we can also optimize for energy consumption as well as satisfying contextual suggestions.
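Projecting complex lighting onto a few human-oriented control axes can be sketched as a two-stage mapping: a context label maps to perceptual targets (overall brightness, warmth), which in turn map to per-fixture outputs. A toy illustration with hypothetical context labels and a simple two-channel (cool/warm) fixture model:

```python
# Sketch: context -> perceptual axes -> per-fixture levels.
# The context labels and target values below are invented for
# illustration, not taken from the group's controllers.

CONTEXT_TARGETS = {
    "focused_work": {"brightness": 0.9, "warmth": 0.3},
    "social":       {"brightness": 0.5, "warmth": 0.8},
    "presentation": {"brightness": 0.2, "warmth": 0.5},
}

def fixture_levels(context, n_fixtures=4):
    """Translate a context label into per-fixture (cool, warm) channel
    outputs, each in [0, 1]."""
    target = CONTEXT_TARGETS[context]
    cool = target["brightness"] * (1 - target["warmth"])
    warm = target["brightness"] * target["warmth"]
    # Uniform distribution across fixtures; a real controller would
    # weight fixtures by proximity to the occupant and by energy cost.
    return [(round(cool, 3), round(warm, 3)) for _ in range(n_fixtures)]
```

The point of the reduction is that a learner (or a wearer) only ever adjusts two intuitive axes, while the system handles the many underlying fixture channels.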

And now, it’s not just lighting: we’re also working with projection. Before too long we will have surfaces covering entire walls that provide dynamic video imagery. We now have large monitors, of course, and eventually we’ll have smart wallpaper. How do you control that to bring the right atmosphere into the room? We look at it responding to the individual, because we can measure affective parameters as well: Are you stressed? Are you relaxed? Are you into flow? What is your internal state? We can start to estimate that and have the room respond accordingly. The precise way we respond is different for everybody and can change—the system has to learn. But we discovered that it can learn sequences of images and lighting and bring you into a certain state that can be better suited to what you’re doing.

Boundaries between humans and technologies

With all these technologies, what is your vision of our daily life in ten years’ time?

I think it’s going to come back to what we envisioned as early as the 1990s, when ubiquitous computing first came about at places like Xerox PARC, in Palo Alto, with people like Mark Weiser and the other early pioneers. They had the idea that computational infrastructure would become common—almost a socialistic principle—whereby we would share this infrastructure. In those days, you didn’t have mobile phones; monitors were precious things usually associated with a particular computer. I think we’re going to get into an era where the information world will be continuously brokered between wearables and infrastructure. Information will reach you in different ways—projected right into your eyes and ears or coming from monitors and speakers nearby. In this world, sensor data from everywhere will flow up, and context will flow down, to guide what you’re doing, or to guide the “digital” things that are happening in your vicinity. It’s not like we’re going to pull out a phone—I think there will be very little of that, where we run an app and then have to do stuff that diverts our attention. The world isn’t an app, and under ubiquitous computing users will always be adding on capabilities instead of downloading software.

It’s just going to be us doing what we do—the digital world is going to manifest in the right way around us, and we’re going to live under that pooled infrastructure. In a way it ushers in the dream of what early researchers in ubiquitous computing were after.

Machine learning is advancing enormously—we’re only seeing the beginnings of it now. Context has always been hard because the real world is noisy. Making a reliable decision about what you’re doing is tough. However, it has gotten better because we have more information, more data. Our learning algorithms have also improved, drawing on deep learning and related approaches, along with hardware optimized for this kind of work. This is going to be leveraged far more. We’re going to be moving away from putting our fingers on screens.

And how will this impact the way we design houses, office buildings, etc.?

That’s a great point. I’m not an architect, but I suspect the whole notion of private versus public space is already changing. Look at a contemporary office space: people work in open environments, but I think people naturally want to work in private environments too. There is a tension between the two. It depends on what the team is doing and what the dynamic is. I think there are going to be different ways of isolating yourself in a public environment—wearables are one example. There will also be a revolution in connecting to other people in public and private environments—where other people can virtually be in your presence in many different ways, not just via a video conference. I suspect that this kind of infrastructure will change the nature of perceived space. There will be public displays conveying all kinds of information, both personal and aesthetic, related to the space. Lighting will be totally dynamic and everything will be networked.

Think of a building where you have a wearable computer. What is it going to be like? It’s an intriguing idea that people have explored in science-fiction and fantasy, but it’s not so far off now. Currently, we’re playing with a HoloLens from Microsoft and we’ve got an entire outdoor landscape manifesting on this table. You wear this thing, and suddenly you see this beautiful outdoor setting which you can virtually walk around, see sensor information manifesting on it, point to it and interact. This was in the realm of fantasy, but we’re building it now.

The roles for the virtual and the real are going to change. These constructs will be mobile, and you’re going to bring your virtual bubble with you as you walk around. The future definition of the individual is similarly intriguing. This will all probably affect workspaces and social spaces too. I don’t know exactly how it will roll out, but it will involve very creative architects—they can go a long way with this, I’m certain.

Towards a resynthesized reality

In that kind of environment, the boundaries of the living, where the natural starts and ends, become very vague.

This is going to be a major issue in the future. The world of information wants to have a presence in the real world in different ways, and we’re blurring that boundary a bit. But we’re not going completely virtual. Most of the stuff in VR really is just VR, where you’re in a virtual space and that’s it. What we do is have the real world drive the virtual environment in real time. We call it resynthesized reality, built on top of what we call cross-reality—another idea of a distributed top layer that resynthesizes perceived reality through sensor data.

People will be growing up in this world. We’re getting to the point where we don’t need to pose a query to the web—it’s going to be driven by context and what the digital world, or the cloud, or the virtual world thinks is important. We’re basically going to augment humans at first via these techniques and resources. Some of them will be cognitive, some of them will be visceral or sensorial. Eventually, we’re going to transform ourselves and everything via sculpting DNA—who knows what we’ll do if we go far enough? There are other people working on those aspects here. That’s an intriguing future that is highly discontinuous.

Another technological dilemma we found in the philosophical literature is how far we should go in geo-engineering or eco-engineering. What’s your vision of this issue?

We need all the tricks at our disposal because we need to be ready and looking at all the possible climate forecasts—we’re soon going to be at a point where we’ll be seeing significant warming effects. Eventually, if we spray sulfates or whatever else into the stratosphere, it may relieve some of the symptoms of warming without huge side effects, although we don’t know the climate well enough to say for sure what will happen in detail. We have to make better models, do some limited tests of these ideas, and see what’s possible, effective, and feasible. The danger is that we see it as a panacea—“Oh, we can just spray some stuff and then we can keep on burning fossil fuels”—that’s the worst possible outcome. We’ve got to get off of carbon-based energy, and maybe use techniques like these to control temperatures in the near future. Then, if we knew how to pull carbon effectively out of the air at some point, we could fix this properly and regulate the actual climate. We’re going to have to master the climate someday anyway, because it goes through natural cycles. If humans are around long enough for that to affect us, I think we’ll have to be able to deal with it, unless we get to a point where we don’t care about climate—for example, if we have transcended into something else that’s climate agnostic. We are on the cusp of seeing what that is going to be. It’s an exciting time to be around because, no matter what happens, there will be profound changes, and we are close to seeing them play out.


This article was initially published in Stream 04 - The Paradoxes of the living in November 2017.
