Will Privacy Exist in the Metaverse?


On 3rd November 2021, EFF (Electronic Frontier Foundation) held its first At Home live-stream event discussing virtual reality (VR), surveillance, and XR technologies. EFF was joined by experts in XR to discuss how augmented reality and virtual reality technologies may threaten our privacy while creating opportunities for unprecedented forms of surveillance. The discussion also covered the new forms of surveillance that XR tools such as eye tracking enable, and the rights, safety, and data privacy risks that come with them.

The rising uptake of virtual reality and augmented reality, collectively known as extended reality, or XR, has raised concerns about privacy and data security among consumers. Ads are already interstitial in our everyday lives, and eye tracking stands to make them even more real: according to Avi Bar-Zeev, eye tracking will make ads almost indistinguishable from reality. The level of intimacy required for this kind of experiential advertising also requires a level of vulnerability, which makes it easier to manipulate us and weave the ad message into our reality. This is where most concerns arise: personal emotions are triggered in the process, and there is no clear line around what data is or is not shared. The panel discussed this further.

At Home with EFF
Screenshot: YouTube Live Stream

Credits

Speakers

  • Katitza Rodriguez: As EFF’s Policy Director for Global Policy, Katitza concentrates on comparative policy of global privacy issues, as well as government access to data at the intersection of human rights law.
  • Avi Bar-Zeev: XR pioneer and co-inventor of Microsoft HoloLens. Avi has almost three decades of experience in XR, formerly working with Apple, Amazon, Google Earth, Second Life, Disney VR, and more.
  • Daniel Leufer: Europe Policy Analyst at Access Now’s Brussels office and former Mozilla Fellow. He works on issues around artificial intelligence and data protection, with a focus on biometrics.
  • Kurt Opsahl (moderator): Deputy Executive Director and General Counsel at the EFF. Kurt is an experienced attorney who, in addition to leading EFF’s Coders’ Rights Project, is an expert in civil liberties, free speech, and privacy law cases.
  • Kavya Pearlman: XR Safety Initiative (XRSI) Founder and Information Security Researcher. Kavya is an award-winning cybersecurity professional, previously advising Facebook and serving as head of security for Second Life.
  • Micaela Mantegna: aka “Abogamer” – Affiliate at Berkman Klein Center at Harvard doing research on video game policy, ethics, and immersive technologies for extended reality (XR). She also founded GeekyLegal, Women In Games Argentina (WIGAr), and is an ambassador for Women in Games International (WIGJ).

Transcript

Katitza Rodriguez: [00:00:00.45] Ok, hola, everyone. My name is Katitza Rodriguez, and today I will be hosting a conversation with the amazing Avi Bar-Zeev. Avi has been building works of beauty for over 30 years, from the digital twin that became Google Earth, to Second Life, to more recent places and experiences [00:00:30.00] at multiple companies, including Microsoft's HoloLens, Amazon's Echo Frames, and most recently Apple, in mysterious ways. He has also worked to clarify the harms of ad-driven business models and has shown how our right to privacy is, for that matter, a human rights issue. I find his writing, even given the existing limitations of the technology, very compelling, and his piece [00:01:00.00] about its potential to open new forms of entertainment, art, and storytelling points to a new way of defending our rights online. But whenever I dig into this topic, I'm concerned to hear researchers talk so happily and enthusiastically about collecting more of what is called egocentric data about users: about our attention, our emotions, even our very movements. And, of course, [00:01:30.00] about the inferences that can be drawn from all this data. So I have been thinking about how to make sure that these technologies respect privacy. I do think the debate should not only be about whether inferences can be drawn accurately, but should also open a discussion about whether we should allow some of these correlations at all. So thank you so much for taking the time to join us today and share your knowledge with us. I have a few questions for you. The first one: there [00:02:00.00] are several different acronyms that get used in this space: VR, XR, extended reality. Can you explain what XR means?

Avi Bar-Zeev: [00:02:15.15] Sure. Thanks. Thanks for having me. You can hear me, right? Yes. Ok, good. So XR is a term that folks in the industry came up with because a lot of the other terms were confusing, and we have a problem in this [00:02:30.00] field with companies coming in and taking the names that we all use. You may have seen that happen recently. So we use XR as a placeholder term to mean any reality, whether it's AR, VR, or MR. And that's after having some of these things happen, like mixed reality used to mean the whole continuum until a certain company started using MR as their name. And even with XR, a certain other company has tried to call it extended reality, and honestly that doesn't mean anything. I don't know what [00:03:00.00] extended reality is, so I just like to use the term XR. And if you want to be really clear about it, think of it like Rx for drugs: you know how you have the R and the Greek letter chi; just think of it as that in reverse. That's what we call the whole field. It's not going to be that popular, but it's what we call ourselves, mostly.

Katitza Rodriguez: [00:03:23.69] Let's put things in context. What's at stake for privacy now and in the near future? [00:03:30.00] Will people be in control of their lives, do you think?

Avi Bar-Zeev: [00:03:35.54] I think in the short term, yes, but decreasingly so with this technology. I hear a lot of people say, you know, who cares what some company knows about us? And they give away their personal information for free just for a little bit of convenience. And that's the norm, and I think it's because people don't see a lot of great alternatives. Privacy feels a little abstract, kind of like saving for retirement. But what matters is whether these companies can [00:04:00.00] use our personal data to manipulate and maybe even control us. And I know it's hard to believe, and some people dismiss this as mind control rays. And, you know, we like to think we're rational, beyond such influence. The truth is, this stuff never really worked that well in the past, so it's easy to say, oh, it just doesn't work, they're just making it up. But it's going to work really well in the future, and that's really what I'm here to talk about, because I've already worked on research like this. I've worked in this space for 30 years, and I can see a clear [00:04:30.00] trend of the profit motive and the potential of the technology coming together. That's what I really want to talk about. So I want to be clear that a lot of the stuff I talk about doesn't exist today, except in research. But as devices are rolled out that have things like eye tracking, it's going to become more and more common. Let's flip back and just look at the history of this. We know, you know, TV ads didn't really work well.

Avi Bar-Zeev: [00:04:52.61] I mean, they did somewhat, but you could imagine some of these ads being, you know, emotionally manipulative. They would play on [00:05:00.00] our emotions or sensitivities. But how well did they really work? I mean, how many people really think drinking Pepsi makes them more likable, or that buying Duracell batteries makes them feel more connected with their kids? That's what the commercials are selling us on, and people don't necessarily believe it. But on the other hand, if those commercials didn't work, the companies would probably be doing something else, right? So there's something to it; it does have an effect on us. And that's before personalization. Once the system starts to know who we are really well, we're going to see a whole [00:05:30.00] new level. Kind of think of it as the advertisers being like professional gamblers: they're betting on us, they're betting on our behavior, and they don't control the cards, but they can make small bets, and over time they win more and more money. And the companies that are like the house, the places where the bets are being placed, they don't have to win every hand either. They just have to win over time; they have to win by percentages. We're going to see this get more and more extreme over time, with personalization [00:06:00.00] down to the individual level. Right now, it's still bucketing, but we'll talk more about what happens when you get down to the individual.
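To make the gambling analogy concrete, here is a minimal simulation sketch. It is purely illustrative; the impression counts, baseline conversion rate, and personalization lift are invented, not taken from the talk or any study. The point is only that a small per-impression edge, applied across many impressions, compounds into a large absolute difference, which is how the "house" wins by percentages.

```python
import random

# Illustrative only: a tiny Monte Carlo of the "professional gambler" analogy.
# A hypothetical advertiser with a small per-impression edge does not need to
# win every bet; small percentage advantages compound across many impressions.

def simulate(impressions: int, baseline_rate: float, personalized_lift: float) -> tuple[int, int]:
    """Return (baseline_conversions, personalized_conversions) for one run."""
    baseline = sum(random.random() < baseline_rate for _ in range(impressions))
    personalized = sum(
        random.random() < baseline_rate * (1 + personalized_lift)
        for _ in range(impressions)
    )
    return baseline, personalized

if __name__ == "__main__":
    random.seed(0)
    base, pers = simulate(impressions=1_000_000, baseline_rate=0.01, personalized_lift=0.20)
    print(f"baseline conversions:     {base}")   # roughly 10,000
    print(f"personalized conversions: {pers}")   # roughly 12,000, from a small per-bet edge
```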

Katitza Rodriguez: [00:06:08.03] Thanks. Let's continue with a question on privacy. There's a whole discussion on how XR can know how we feel and what we are interested in, and whether or not it really works. I know there are several sensor types that could be used for this, so let's take maybe the best known of these. [00:06:30.00] What is eye-tracking technology, and what are your privacy concerns in this space?

Avi Bar-Zeev: [00:06:36.58] Sure, I'll get to eye tracking in one second. One thing I need to explain first, I think, is that right now ads are very interstitial. They show up on top of web pages, they show up between videos. Ads in the future are going to be part of the world, indistinguishable from the world. It's what we might call experiential advertising, and the manipulations can also become very [00:07:00.00] subtle and woven into the fabric of our reality. We won't even notice them, mostly because we have to believe the reality that we're in. We can't function very well if we're questioning the very nature of reality, questioning what's real, what's an ad, or what's trying to manipulate us. This is a level of intimacy that requires some vulnerability, and we're really lacking the defenses when it comes to that, especially when our personal emotional triggers become known to the system. The algorithms that can push those buttons are the ones that can get past our rational responses. And [00:07:30.00] you know, we've talked a lot about this with social networks recently, about how the most enraging content becomes the most engaging content. That's one example of us being personally engaged. But eye tracking, I think it's important to say, has a lot of positive uses, right?

Avi Bar-Zeev: [00:07:46.36] It's not an evil technology that's just showing up to take over, but it can be used in negative ways. Some of the positive ways are to increase the performance of the devices, or to help create operating systems in the future that know [00:08:00.00] what we're trying to do and help us actually accomplish our tasks. I have a long article describing the details if you're really interested, but because of the way our eyes work, this becomes a window into our mind. Our eyes show what in the room we're thinking about, what we notice, how we feel about it, whether we're excited or bored. Pupil dilation is an autonomic reflex that shows when we're excited about something, and having cameras capture that tells the viewer that we feel excited about something. [00:08:30.00] Basically, forward-facing cameras, like you'll see on the Ray-Ban glasses or on, you know, Facebook's Project Aria or their Ego4D collection, capture what we're seeing in the world, and then the eye cameras capture where we're looking. The combination can tell the computer pretty much what we're thinking about relative to the world that we're in; not our abstract thoughts, but our immediate ones. So the danger in this data is extremely high, and I don't think most people understand [00:09:00.00] how important this data is. I don't think anybody can really knowingly consent to all of the deeply personal insights that can be gleaned.
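To illustrate why this combination is so sensitive, here is a minimal sketch, with entirely invented names, data structures, and values (this is not the API of any real headset), of the kind of inference being described: fuse which object the gaze lands on (from the scene camera), how long it dwells there, and pupil dilation, and a crude per-object interest score falls out almost for free.

```python
from dataclasses import dataclass

# Hypothetical sketch: combining gaze target, dwell time, and pupil dilation
# into a per-object "interest" estimate. Names and values are illustrative.

@dataclass
class GazeSample:
    target_object: str      # object the gaze ray intersects this frame
    dwell_ms: float         # how long the fixation lasted
    pupil_dilation: float   # normalized change from the wearer's baseline

def interest_scores(samples: list[GazeSample]) -> dict[str, float]:
    """Accumulate dwell time weighted by above-baseline pupil dilation per object."""
    scores: dict[str, float] = {}
    for s in samples:
        # Long fixations on an object while the pupil is dilated above baseline
        # are used here as a crude proxy for arousal/interest in that object.
        scores[s.target_object] = (
            scores.get(s.target_object, 0.0) + s.dwell_ms * max(0.0, s.pupil_dilation)
        )
    return scores

samples = [
    GazeSample("soda_ad", dwell_ms=400, pupil_dilation=0.25),
    GazeSample("doorway", dwell_ms=160, pupil_dilation=0.0),
    GazeSample("soda_ad", dwell_ms=600, pupil_dilation=0.5),
]
print(interest_scores(samples))  # {'soda_ad': 400.0, 'doorway': 0.0}
```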

Avi Bar-Zeev: [00:09:09.01] Just having the ability to delete your recordings before they go into the cloud is not enough to guarantee safety. It's also possible for this technology to learn how we feel about each other. Just think about the glances that people make at each other when they feel certain ways, positively or negatively. If a computer is watching those glances, the computer can learn pretty well how we feel about each other and build [00:09:30.00] up that model without us ever even expressing who our friends are or what our social network is. Just to give you an example: with one person that we put through eye tracking, we learned something that none of us had realized after working with him for a year. We learned that he looks at your mouth when you talk, and not your eyes. Most people will look at your eyes, but he looks at your mouth, which, you know, might be the case for someone who lip-reads. He wasn't hard of hearing, but it might have indicated that he had [00:10:00.00] some cognitive differences, that he was maybe neurodiverse in terms of how he processed speech. That's an insight he never had, and we learned it just by looking at his recordings.

Avi Bar-Zeev: [00:10:08.92] And I was very conscious, when I was wearing eye-tracking glasses, of being careful about what I looked at: don't make the wrong look or gesture, because it's all going to get recorded. It has a very chilling effect on how we interact as well. And it's going to be something that we all have in our devices; it'll be built into the glasses. The first-generation glasses don't have it, but Project Aria already has [00:10:30.00] it, or looks at having it, and other devices will have eye tracking as well within the next one to two years, you'll see. I could hear you, but the stream cannot hear you. [00:11:00.00] Yeah, as I said, the paper I collaborated on with Brittan Heller, who is [00:11:30.00] a very well-known human rights lawyer and author: I think the idea was to cover the history of advertising and to show how it projects into the future. And one thing to look at in that regard is Super Bowl ads, right? We probably all know people who only watch the Super Bowl for the ads, because they have high production value and they tend to be more dramatic and more interesting. They're not watching for the sports; they're not even tuning in for the sports at all. I think what we're looking [00:12:00.00] at in the future is that the advertisement becomes so compelling and so much a part of the world that we like it.

Avi Bar-Zeev: [00:12:06.24] Or should we? Do we hate it? Is it bothersome? Actually, maybe we like it too much, and maybe it's affecting us a bit too much. And I think what we're going to be seeing, especially more on the VR side than the AR side, is that once the world can be changed based on our personal graphs, the information that the companies collect about us, you could start [00:12:30.00] to see the world tailored to match what we really want and need. That's maybe a good thing; but when it's used against us, it's not so good. So there are a few things about eye tracking that are important to understand. One is that, just because of the way our eyes normally work, when you blink or when you move your eyes rapidly, you're actually blind. You can't see the world, and our brains just fill in the continuity of the vision. You can actually change the world while we blink. You can change the world while the eyes move around, and [00:13:00.00] you won't even notice that the world has been changed. This has been used in research for some really impressive results.

Avi Bar-Zeev: [00:13:05.88] One of them is called redirected walking, where you are literally walking in a circle, but you perceive yourself to be walking in a straight line because the world keeps changing around you as you move. So now imagine that tied to product placement. The elements of the world change every time we blink or look around, and by gauging that and matching it to our attention and what we're interested in, the world can now be optimized to show us what [00:13:30.00] the advertisers want us to see, so that we will look at their content and be more persuaded by it. But I think even more importantly, eye tracking allows this experimentation to happen: what things catch our eye, what things make us emotional. And it'll be different for everybody. This is where TV ads and future personalized ads really diverge: the differences between us. So for example, you may have one person whose hot-button political issue is abortion, and they're willing to [00:14:00.00] vote for candidates based entirely on that one issue alone. By being able to push that button, the advertisers in the system are going to be able to trigger somebody to be emotional for a period of time. They might even be advertising about that particular political topic, or they may be trying to just get somebody worked up so that the next thing they see is something they don't look at with the rational filter they otherwise might. And all of a sudden you wind up seeing people, like today, saying, I'm against CRT, but you ask them what it means and they can't [00:14:30.00] tell you.

Avi Bar-Zeev: [00:14:31.23] Their buttons have been pushed emotionally, and it's a mapping to what their sensitivities and their issues are. So how far can it go? It's unclear right now; it's only been tested in early phases. We've seen some examples of this being able to work. We've definitely seen examples of eye tracking being able to show emotional responses and being able to track what people are interested in. But the projects that are going on today are the data collection projects that will enable those experiences [00:15:00.00] in the future. That's what we have to think about. All the stuff that's happening with Ego4D and Facebook's Project Aria is being collected so that some companies can learn what to do, so that they'll know what to do, and we have a little bit of time to respond to that before it's put into action. So now, I think, is the time to understand all the potentials of where this can go.
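As an illustration of the change-blindness mechanism Avi describes above (editing the world only during blinks and saccades, when the wearer is effectively blind to the swap), here is a hypothetical sketch; the event names and the renderer interface are invented for illustration, not taken from any real engine.

```python
import queue

# Hypothetical sketch of blink/saccade-gated scene edits (change blindness).
# The eye-tracker event names and the renderer interface are illustrative.

class ChangeBlindScene:
    def __init__(self) -> None:
        self.pending_edits: queue.Queue = queue.Queue()

    def request_edit(self, edit) -> None:
        """A content system asks for a world change, e.g. swapping a billboard."""
        self.pending_edits.put(edit)

    def on_eye_event(self, event: str, renderer) -> None:
        """Called by the eye tracker; during a blink or saccade, vision is suppressed."""
        if event in ("blink", "saccade"):
            while not self.pending_edits.empty():
                # The edit lands while the wearer cannot see, so it goes unnoticed.
                renderer.apply(self.pending_edits.get())
```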

Katitza Rodriguez: [00:15:26.12] So Facebook got a lot of attention last week for renaming itself with a reference to [00:15:30.00] a science fiction concept. What's your understanding of the metaverse? Also, Avi, I'm told that the public cannot hear my questions. Do you mind repeating my questions for the audience?

Avi Bar-Zeev: [00:15:44.57] No problem. So the question was, what is the metaverse, and what does it have to do with Facebook? The short answer, I guess, since I'm running over a little bit, is that it means something different to every person. It's had at least three to seven major definitions over the last [00:16:00.00] 30 years. For some people, it's about Ready Player One-like worlds, maybe just on tablets and phones right now. For other people, it doesn't even need to be 3D at all; they'll still call it the metaverse because it connects society, or it's about mirror worlds like Google Earth. And for some people, it's more about distributed control; it's more about NFTs and things like that. So it really depends on who you ask. For me, I've been focused [00:16:30.00] on AR, so the one definition that concerns me the most is when people start talking about AR as the metaverse, as our way of interacting with the real world in the future, but we can save that for another day. The thing that Facebook always needed to do, which they hadn't done before, strangely enough, even though they're a social network, is that they never really had people on the website.

Avi Bar-Zeev: [00:16:52.28] Think about it. You saw icons and videos and text and pictures, but the actual people were not [00:17:00.00] in the website. The people were remote, so you only saw the artifacts of people. And the thing that explains the most, I think, why Facebook is doing this, apart from everything we can say about the trouble they're in and the rebranding and all that, the real reason why the metaverse matters to Facebook, is that they need to get over this problem. Their entire premise is that they're connecting people, so they need people to actually be in the space together, and they haven't done that yet. So all this stuff is really about bringing people together, and when Zuckerberg talks about the embodied internet, that's what he's [00:17:30.00] really saying. But honestly, I don't think the future of work is having virtual meetings in boring conference rooms. I mean, that part of it seems crazy to me. What's the point? The future should be about how to avoid having boring meetings in the first place, not just about having to do them virtually.

Katitza Rodriguez: [00:17:44.21] Ok, Avi, and I think this is our last question. What about the glasses? You have spent years working on wearable devices for augmented reality. How far has this gotten, where is it likely to go next, and how will this impact our [00:18:00.00] privacy rights?

Avi Bar-Zeev: [00:18:02.91] Yeah, so I'll try to go real fast on this one. The glasses are in progress; it's a very difficult technical challenge to make glasses that people want to wear. First of all, none of us really want to wear glasses; those of us who do, wear them because we have to. But making glasses that you would be willing to wear outdoors and in social settings is incredibly difficult. Our faces are very sensitive real estate for putting anything on. And even in the future we may have contact lenses, but not everybody wants to wear contacts either; those who do, do it because they have [00:18:30.00] to. So these things have to add a lot of value; they have to be extremely good before people are willing to actually adopt them. The challenge with everybody wearing glasses, beginning now with glasses that simply record other people, is that there are a lot of privacy concerns with people being recorded without their permission in public. I think we really need to have a discussion about a permissions framework, in which people can actually say whether they want to be recorded or not, [00:19:00.00] out in public and in private. And of course there are important cases: we want to make sure that we capture wrongdoing, like what happened with George Floyd's murder. We don't want to regulate that out of the toolset that we have for combating abuses of authority. But we also want to make sure people have the ability to say, I want to be private right now, when they're not doing anything wrong. Eye tracking is going to be a huge thing.

Avi Bar-Zeev: [00:19:23.67] As we said before, with the combination of these cameras capturing people and eye tracking, the companies that are building these devices, [00:19:30.00] if they want to, can build a map of all the people in the world and a map of what we're all doing. And that has a chilling effect on how we behave. This is where privacy really impacts all of our human rights, because the more we know we're being watched, the more we know we can be tripped up for anything we did wrong, whether it's being canceled socially or being arrested for not stopping perfectly at a traffic light or a stop sign. All the things that represent how we engage with the world come under extreme scrutiny when [00:20:00.00] we are recording everything and recording each other all the time. And that's one of my biggest concerns. There's a reason we have the Third and Fourth Amendments to the Constitution. I'm not a lawyer, but, you know, at the time we had soldiers being quartered in people's houses, and we said, oh, we don't want the government invading this. I think we also need to say we don't want big companies invading our privacy, because it will kill our ability to be ourselves, to have our freedom of thought, our freedom of association and speech, even. Privacy [00:20:30.00] is the first line of defense to make sure that we can truly be ourselves. So the glasses are really starting to impact that, and I think now is the time to set the policies that say what we can and can't do.

Katitza Rodriguez: [00:20:42.00] Thank you, Avi. It was very interesting. I think we are short on time right now and need to go to the next panel, and I don't really have any other questions. Is there one question from the audience? If not, then we can move to the next panel. Thank you, Avi.

Avi Bar-Zeev: [00:20:59.17] Thank you. Thank [00:21:00.00] you. Yeah. So no questions from the audience. We’re ready for the next panel with Kurt when you’re ready.

Kurt Opsahl: [00:21:15.78] Ok, everybody, let me invite our panelists up to the stage. We are fortunate to have a great panel with us today, and we're going to talk about surveillance in XR, in [00:21:30.00] all its forms: VR, AR, XR; as I said, no matter what you call it, it's a generalized term. This has come a lot to the fore lately with Facebook's adoption of the metaverse as the future, envisioning a future in which people will be doing a lot of things in virtual reality. And then we also have the possibility of a future with lots of AR, where people will be wearing glasses that could potentially have cameras and microphones [00:22:00.00] that will see and interact with the world around them. And these create a lot of issues. We want to make sure that we find our way to a future in which we can take advantage of the cool features of these new technologies without giving up the fundamental rights to privacy and free expression that exist in our current world. And all of these things are challenging, especially surveillance, because in order to be part of a virtual world, or to fully interact with [00:22:30.00] a VR world, there is going to be a lot of data being collected, sent through centralized servers, and transmitted. For all the people who are here in this room, information about what you're doing and saying is going to leave this space and be distributed amongst all the other people here. This is a cool technology for meeting and talking, but it also creates a lot of possibilities for surveillance. So, I should introduce myself. My name is Kurt Opsahl. I am the Deputy Executive [00:23:00.00] Director and General Counsel at the Electronic Frontier Foundation, and I've been working on our VR and XR issues. Now let me turn it over to our panelists to do a quick introduction of themselves. Let's start with Daniel.

Daniel Leufer: [00:23:16.51] Thanks. Hey, can you hear me? Yes. My name is Daniel Leufer. I'm a policy analyst at Access Now's office in Brussels, in Belgium, in Europe. Access Now is a global human [00:23:30.00] rights organization that works to protect the digital rights of people at risk all around the world. I mostly work on things that fall under the label of artificial intelligence, but I've also been doing some work on XR, and yeah, especially thinking about how this can be the next frontier for some of the things we're seeing today.

Kavya Pearlman: [00:23:57.75] Hello, everyone. It's great to be in [00:24:00.00] this space, in virtual space, with you all again, and great to see that EFF is taking this step to really get into XR. I'm Kavya Pearlman, the CEO and founder of the XR Safety Initiative, XRSI. Our mission is to help build a safe and inclusive XR ecosystem, [00:24:30.00] one piece of which is the metaverse, which we will talk about, of course. In order to carry out this mission, we identified a few focus areas, such as diversity and inclusion, so we have a dedicated initiative that focuses on those aspects. We have a medical council, because in these realities [00:25:00.00] very intimate data is at risk, and a council that focuses on data protection and privacy. Then we also have a trustworthy media initiative: oftentimes we conduct immersive podcasts, and we are also working on educating more and more journalists on how to use these realities. And finally, [00:25:30.00] we have a child safety initiative. Very recently we helped contribute to policy reform, and we continue to do so: as lawmaking and policy evolve globally, we're trying to make sure that these new technologies are encompassed. So, all of that. This is my passion; by profession I'm a security professional, and that's what [00:26:00.00] brings me to this. So I'm always here to listen and share knowledge with you. Thank you.

Kurt Opsahl: [00:26:21.65] Thank you, thank you, Kavya. Micaela, would you like to introduce yourself?

Micaela Mantegna: [00:26:26.40] Hi, everyone, can you hear me fine? Yes. Okay. [00:26:30.00] Thank you so much for the invitation. It's an honor to be here, and I'm so happy that we are discussing these very relevant topics. I'm Micaela Mantegna, a gamer, and I wear many hats; the one that is probably most relevant for today is that I'm an affiliate at the Berkman Klein Center at Harvard, doing research on video game policy and, particularly, [00:27:00.00] the metaverse, something I started to talk a lot about last year, which sounded like something from the future and is now in everyone's mouth. Most of my work has been related to artificial intelligence, ethics, and intellectual property, and my concern is how these regulations will interact with these kinds of immersive worlds. [00:27:30.00]

Kurt Opsahl: [00:27:35.10] Well, thank you. So let's kick things off with a central question. What issues is XR raising for surveillance that are different from, or go beyond, the surveillance issues we have with traditional audio and video communications? Feel free to jump in.

Kavya Pearlman: [00:27:57.46] Yeah. As [00:28:00.00] you know, we always talk about issues in terms of context, and I think that is the term we should really anchor our conversation in. Because, I mean, how many of us really want to live in a surveilled reality? And it's not just some remote watching; it is exactly what Avi described earlier with [00:28:30.00] eye tracking, if we really want to think about it. It's, you know, what you might call the panopticon; I don't know if you want to unpack that term, but that's definitely one particular issue: being watched constantly. And then there is data collection, massive data collection, being surveilled at all times to [00:29:00.00] the point of being manipulated. I don't know if we want to have that in our daily lives.

Kurt Opsahl: [00:29:14.45] Do you want to jump in? Please.

Daniel Leufer: [00:29:18.35] There are a lot of issues; I think we could stay on this first question for a couple of hours. But coming at it from the perspective of my work: a lot of the work that I do is on a biometric [00:29:30.00] surveillance campaign to try to ban the use of biometrics, things like facial recognition, in publicly accessible spaces. And at the moment, you know, that's being done by law enforcement agencies, going retroactively through CCTV footage. But if you look at something like AR glasses, in the future you're going to have the possibility of running all sorts of facial identification on people just walking in a public space. You could identify [00:30:00.00] people through your glasses, up to all sorts of crazy applications that in some cases are pseudoscientific; there are companies out there who say they can predict whether someone is a criminal or a potential terrorist based on their facial features. This is really stuff we don't want people to be doing, whether under human rights frameworks or, you know, established science. And so without regulation, or without responsible design, the potential that opens up, with [00:30:30.00] these cameras on everyone's faces and these interfaces running all sorts of crazy applications, is really frightening.

Micaela Mantegna: [00:30:40.26] If I could jump in: I think we still have huge issues on social media, and we are kind of porting them into these immersive worlds. The other layer is that they are going to collect data that [00:31:00.00] is more intimate to us, as Avi was saying, and algorithms are at play; they can create inferences. And one of the things I always point out about this is that those inferences are mostly secret to us, so we walk around with these kind of invisible layers about us, about what the algorithms think we do. And there's no way that this cannot be abused. If we think about the promise of [00:31:30.00] social media, and our images and pictures online, and how that has been abused by so many companies; not least Clearview AI, which started to take this huge trove of information and sell it back to law enforcement. And now we are going to have devices that are really close to our body. And as Avi was saying, they are capturing the [00:32:00.00] things we are looking at and our pupils. Because if you saw that Facebook conference, they were talking about the future, hinting at devices that are going to track our neural pathways just from how we think about moving our hands. And that comes back to mental privacy. [00:32:30.00] One of the fundamental principles of criminal law is that you cannot be charged or prosecuted for what you are thinking, and there's no way we can just believe the promise that this is not going to be abused, particularly when we have these precedents.

Micaela Mantegna: [00:32:50.37] So I think it's time to get together and start putting some real questions to these promises, [00:33:00.00] some of which I saw in the recorded version of the conference. They are also trying to sell a very forced version of reality; we don't need to port everything into these worlds. What we were seeing at the last conference was supposed to be about creating a space where you can just be free, change your avatar, adopt another skin; [00:33:30.00] instead they are trying to put reality into the virtual world with us, and that kind of defeats the sense of what a metaverse should be. So it's really time to talk about decentralization, and not least about centralization, because one of the things that for me is really a big deal is how intellectual property regulation is going to come [00:34:00.00] into play in all of this. The kinds of abuses that we are already seeing with automated content moderation are going to be brought into this very intimate creation of content. As Avi was saying, we are going to see real-time creation of content thanks to generative artificial intelligence, and that is going to be created from data that was previously taken from us without our noticing. [00:34:30.00] So there are a lot of layers here that replicate the issues that we already see with social media.

Kurt Opsahl: [00:34:41.58] A number of people could not hear you properly; your voice was cutting out. Ok.

Kurt Opsahl: [00:34:52.14] A lot of really good issues raised there. So we have massive data collection, especially biometric data collection, to [00:35:00.00] enable the interaction, and also the potential inferences, or secondary data derived from it: inferences about what you are thinking, what you are in favor of or opposed to, what excites you and what does not. And these all create additional privacy issues that go beyond what you have in a more traditional video and audio environment. Then there are the issues we just discussed around the panopticon. This is a term derived [00:35:30.00] from a prison design in which a single guard would be able to view inside every cell, and the notion was that, because that guard could be watching you at any moment, this would inspire better behavior in people: they acted as if they were being watched. And this is one of the concerns that comes up with an extended reality environment, where you are being watched in a way that you're not in the physical [00:36:00.00] world. So this brings us to the important question: how do we move forward? How do we keep the promise of this technology and take advantage of it without creating a panopticon?

Micaela Mantegna: [00:36:20.88] I can kick off by saying that, for me, decentralization is key, because one of the things that we should have for [00:36:30.00] an accountable metaverse is different companies working on it, different people working on it, and even governments working on it as well and having a presence there. Because we have seen this tension between centralization and decentralization, and when a single company tries to be the kind of place that encompasses everything, it is going to be very hard to hold it accountable and responsible. For me, the keys are accountability and decentralization. [00:37:00.00]

Kavya Pearlman: [00:37:03.75] So I want us to zoom out a bit. We talk about wanting to solve these issues; we tasked ourselves with solving these issues back in 2018-19, and as soon as we began, the first thing that we encountered was: what the f is XR? So it brings us to the first thing: are we speaking the same language? We [00:37:30.00] also heard the words biometric data mentioned; we heard inference; we heard a lot of terminology being dropped. The one thing that we must do first is collaborate, come together, and then create consensus around taxonomy. One of the terms that we at XRSI coined, and I'll use it as an example, is biometrically inferred data. We can potentially [00:38:00.00] use that term to make better laws, laws that take data capture regulations, or privacy regulations, beyond PII, which is personally identifiable information. So I think one of the major issues that we're going to confront is: just like, you know, Meta is now touting the metaverse, but has anybody taken the time to standardize the term? And that's the kind of activity that XRSI is [00:38:30.00] busy with, with lots of stakeholders and partners. First, we need to create consensus as to what exactly we are trying to address. What does decentralization mean in terms of extended reality, and what is it going to look like? Are we talking about creating virtual worlds based on blockchain? Are we talking about self-sovereign identity, for example? And just because it's on a blockchain doesn't [00:39:00.00] mean it's secure; so what kind of implementations and standards are we going to adhere to? Once we've created those standards, we need to go to step two.

Kavya Pearlman: [00:39:09.31] Step two, which is already underway, is understanding the context of the data, because in certain contexts I would love to share my data, and in some contexts I wouldn't. I do like to shop; I would want some companies to cater [00:39:30.00] to me with specific shopping suggestions and whatnot. But in another context, I don't want my medical data to be in the hands of developers who use those inferences and then share them with some insurance provider; at least in America, you could potentially be denied coverage for issues that you may not even know exist. So I think there is a process we need to follow: we create consensus, we understand the context of the data, and then we create these guardrails, [00:40:00.00] which is basically why we are coming together: Access Now, EFF, XRSI, and even Meta. I would say that without involving and engaging all of these companies it is impossible, because they hold all of this data and they hold the rules of the game. We can help them; we can inform them about what they should do in terms of self-governance, because regulations will come, and we need to educate them.
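As an illustration of the "consensus on taxonomy, then context" process Kavya outlines, here is a hypothetical sketch. The data classes and context rules below are invented for illustration and are not XRSI's actual taxonomy; the point is that the same inferred data can be shareable in one context (a shopping recommendation) and off-limits in another (an insurer).

```python
from enum import Enum, auto

# Hypothetical sketch: classify biometrically inferred data, then gate sharing
# on the recipient context. Categories and policy entries are illustrative.

class DataClass(Enum):
    SHOPPING_PREFERENCE = auto()     # inferred interest in products
    MEDICAL_INFERENCE = auto()       # e.g. gaze or gait patterns suggesting a condition
    SOCIAL_GRAPH_INFERENCE = auto()  # who the wearer pays attention to

# recipient context -> data classes the user has consented to share there
CONSENT_POLICY: dict[str, set[DataClass]] = {
    "retail_app": {DataClass.SHOPPING_PREFERENCE},
    "insurance_provider": set(),                      # never share inferences with insurers
    "clinician_with_consent": {DataClass.MEDICAL_INFERENCE},
}

def may_share(data_class: DataClass, recipient_context: str) -> bool:
    """Share only when the user's policy explicitly allows this class in this context."""
    return data_class in CONSENT_POLICY.get(recipient_context, set())

print(may_share(DataClass.SHOPPING_PREFERENCE, "retail_app"))        # True
print(may_share(DataClass.MEDICAL_INFERENCE, "insurance_provider"))  # False
```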

Kavya Pearlman: [00:40:29.87] The [00:40:30.00] third part is that a lot of people are talking about Facebook, Meta, Facebook, Meta. Has anybody asked, where the hell is Apple in all this? We have not once heard about what is going on at Apple, and I really would like to point out that that black box needs to be unpacked, because they can't just one day get on the stage and unleash all this incredibly sophisticated technology on humanity, with us basically just left sitting there and shouting at them, just like [00:41:00.00] we did with the CSAM stuff. What I'm talking about is, you know, the attempt to break encryption, in a way, with client-side scanning for child sexual abuse material, which is not acceptable. So while everybody starts to demonize Facebook, I'm saying we need to get in there with all the stakeholders, regulators, everybody: create consensus, understand the context, and then start to put the guardrails together, [00:41:30.00] not just regulation but self-governance, and also have this sort of shared responsibility, because platforms can give you choices. Like in virtual reality right now, you have a lot of controls, but you have to press the button to block that harasser, who is still going to come up to you and say horrible things. We have to shift some of this responsibility and share it amongst us.

Daniel Leufer: [00:41:58.15] I would just note that there's [00:42:00.00] actually a huge amount that we can do with existing data protection regulation and the data protection principles that we already have, things like data minimization. I see the same problem with artificial intelligence. You hear things from companies like, oh, this is so sophisticated, regulation can't keep up, or, things like data protection regulation get in the way of artificial intelligence. And it's like: [00:42:30.00] data protection regulation exists, and these artificial intelligence applications violate it. That's why there's a conflict. It's not the regulation that's the problem, it's some of these applications. And, you know, I think applying basic data minimization to XR, applying that to VR, is key. As you said, there are certain contexts in which I will want to give my eye-tracking data, in which I will want to give [00:43:00.00] quite intimate data, because maybe I want to have a really expressive avatar that's full of expression in certain contexts. That's what I want the data used for, and that's it. That's purpose limitation: it's just for that purpose. And if that's respected, there's actually no issue. So there are these basic, well-established principles that would solve a lot of these issues, and companies are just not following them, coming out with this rubbish that, you know, this is a completely new sphere and regulation can't [00:43:30.00] keep up, and that is simply not true. There are cases, of course, where we really do need to adapt to new developments. But I think a lot of these baseline issues that we're encountering are really easy to solve with the tools that we've already got.
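As a sketch of the data minimization and purpose limitation principles Daniel refers to, applied to the expressive-avatar example: compute the avatar expression on the device and transmit only that, never retaining or uploading the raw eye and face signals that could support other inferences. The types and fields below are hypothetical, not any real SDK.

```python
from dataclasses import dataclass

# Hypothetical sketch of purpose limitation / data minimization for avatars.
# Raw biometric signals stay on the device; only the derived expression leaves it.

@dataclass
class RawFaceFrame:
    gaze_xy: tuple[float, float]
    pupil_dilation: float
    mouth_openness: float

@dataclass
class AvatarExpression:  # the only data transmitted to the server
    smile: float
    eyes_open: float

def to_expression(frame: RawFaceFrame) -> AvatarExpression:
    """Derive just what the declared purpose (an expressive avatar) needs."""
    return AvatarExpression(smile=frame.mouth_openness, eyes_open=1.0)

def process_locally(frame: RawFaceFrame) -> AvatarExpression:
    expression = to_expression(frame)
    # In this sketch the raw frame is never logged, stored, or uploaded:
    # once the purpose is fulfilled, only the derived expression remains.
    return expression
```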

Kurt Opsahl: [00:43:44.44] Oh, yeah, absolutely. I think a lot of these are interesting questions that have been addressed in other contexts, and there's a question of how to adapt [00:44:00.00] privacy to new technologies which may be gathering more information or using it in different ways, and how to do that sensibly. At least my hope is that, by giving people more control over their data and how it is transferred, and more transparency so they have an understanding of what it is that they're agreeing to, we can try to preserve privacy and civil liberties in this world. One challenging aspect is bystanders: [00:44:30.00] people who are not participants, who may just be walking by, but who could have their faces analyzed and have this technology applied to them. It's very difficult to have a mechanism to get their consent, or even to have them necessarily be aware that they are participating. But it's a very important part of our society that we're able to walk out in the public sphere; we're used to a notion where people will see us but perhaps forget [00:45:00.00] about us in a few minutes, so that we have a little privacy. Ubiquitous AR glasses might change that paradigm. So how do we address the privacy issues of bystanders?

Kavya Pearlman: [00:45:17.78] Well, great question. And I think, again, I would go back to that same formula: let's start with taxonomy. We said "bystander"; we need to divide that [00:45:30.00] further into what type of bystander situation is going on. One is coexisting in a space with a bystander, especially for virtual reality users: we share the same physical space or virtual space and we interact with each other. Then there is the part about knowing what is going on, for example if somebody is just giving you a demo. That especially applies in the education field; many times researchers just put students [00:46:00.00] in VR. I bring that up because we need to create a culture of awareness: virtual reality is a real experience for some people, and the way you touch someone if they stumble could really trigger, you know, trauma, PTSD. So there is this cultural aspect of bystanders that we need to address. And then there is interruption. Just yesterday, the Quest, in version 34 I believe, added bystander [00:46:30.00] interruption awareness, and we've all seen those clips where a kid or a pet wanders into the play space. So while we know what Space Sense does, it now allows you to see if somebody is entering your Guardian boundary, [00:47:00.00] but have we thought about how it's operating? What other data is being collected while you look around? We haven't been given the technical specifications and the details of what it does and what protocols are being used.

Kavya Pearlman: [00:47:20.17] Once we create this consensus, we move on to: OK, how do we understand the context? Because especially with AR glasses, context is [00:47:30.00] going to matter. The only way to address this is to have contextualized AI, which can handle things like: hey, I'm OK being recorded in my living room; I have the AR glasses, I agreed to the terms of service. But they need to provide us with a mechanism that would not allow recording beyond that. Just like a Roomba doesn't go beyond the boundary I define in my home, it shouldn't record when I'm in my bathroom, and that is not possible unless we have contextualized AI. [00:48:00.00] So we may need to give some data for this, but we need to create those types of understandings, basically a common understanding of the rules of the game, and then those mechanisms need to exist. And finally, you know, coordinate with regulators and everybody to really put guardrails around these things, so that we don't just give away this privacy, and so that bystanders' consent and privacy are respected. I don't know, what do you think, Daniel? [00:48:30.00] Or did you want to add a quick thought?
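As an illustration of the "contextualized AI" and Roomba-style keep-out idea, here is a hypothetical sketch; the zone names, the opt-out flag, and the decision function are invented for illustration, not a description of any shipping device. Capture is persisted only inside zones the wearer has marked as recordable, and never when a nearby bystander has signaled they do not want to be recorded.

```python
# Hypothetical sketch: context-gated recording, in the spirit of a Roomba
# keep-out zone. Zone names and the bystander signal are illustrative.

ALLOWED_RECORDING_ZONES = {"living_room", "home_office"}  # user-defined boundaries

def should_persist_frame(current_zone: str, bystander_opted_out: bool) -> bool:
    """Persist captured frames only in allowed zones and never over an opt-out."""
    if bystander_opted_out:
        return False                     # respect a nearby "do not record me" signal
    return current_zone in ALLOWED_RECORDING_ZONES

print(should_persist_frame("living_room", bystander_opted_out=False))  # True
print(should_persist_frame("bathroom", bystander_opted_out=False))     # False
print(should_persist_frame("living_room", bystander_opted_out=True))   # False
```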

Micaela Mantegna: [00:48:40.82] Just to add to what you were saying: I think that the problems we are describing are things that already exist. It's not a different set of problems; it's a question of magnitude. We are going to have this augmentation, because right now we already have cell phones with cameras that can catch [00:49:00.00] a bystander, but now it's a different scale of information that we can take about them. So the problems are already there, just magnified, and we have been working on this; we should build on that. Because one of the things I notice about these discussions is that we are trying to address everything as if it were something new, and that's a framing I take from [00:49:30.00] Facebook's announcements. For me, that's a risk, because it kind of erases the discussions we have already been having about how we use algorithms and how algorithms process content. I think we should start from the previous experience we have on privacy and surveillance and adapt it to this new magnitude, not forgetting that we already have a base to build on.

Daniel Leufer: [00:50:00.80] If I can jump in: for the question of bystander privacy, it probably makes sense to talk about Facebook's Ray-Ban Stories. We published a blog post after they came out, and, you know, Facebook made quite a big deal out of the fact that they had consulted civil society and taken feedback. We actually participated in an engagement with them, I think in May 2020, and gave them absolutely crystal clear [00:50:30.00] feedback, which they ignored, completely ignored. And to build on Micaela's point, I think it's really important to point out that anything you can do with a pair of AR glasses you can do by pointing a smartphone camera at someone. You know, you can run facial recognition if you have the Clearview AI app; if they made that available to the public, you could run it on your phone or on the glasses. But the problem is that having cameras on the glasses, having the [00:51:00.00] ability to run these apps on the glasses, means a lot less friction. There's a basic friction to me pointing a phone at you, and that friction creates the possibility of intervention. You know, it can stop a person from doing it, and if they're brazen enough to do it anyway, it gives the other person an opportunity to intervene. So when you can run that on glasses, where you're just looking at someone, that friction is gone.

Daniel Leufer: [00:51:24.49] And what we said to Facebook at the time was: if you reduce the friction on the [00:51:30.00] user side by placing this capability in glasses, you need to increase it in some other way. That could be a really annoying signal, whether it's a loud noise, a light, a horrible smell, I don't care what it is, but you need to somehow replace that friction, to reintroduce the possibility for the bystander to know that this is happening and potentially to intervene. And I think as well, as Avi's fantastic [00:52:00.00] blog about the Ray-Bans already pointed out, AR glasses may need a front-facing sensor, but, as Avi noted, that doesn't have to be a camera that can record. The idea that these Ray-Ban Stories are an essential step on the way to AR glasses is nonsense. We don't need camera glasses that look exactly like Ray-Bans; that's not something we need to do, unless you have some kind of sinister plan to [00:52:30.00] realize the panopticon in everyday life. So I think it's a difficult challenge; I would never minimize how difficult the problem of alerting bystanders is. But with Facebook's Ray-Bans, we can say that they've totally ignored the problem and shrugged it completely off their shoulders. That's an example of what not to do, for sure.
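As a sketch of the "reintroduce friction" recommendation Daniel describes, here is a hypothetical example; the device object, its signaling methods, and the delay value are invented for illustration, not the behavior of any real product. The idea is simply that capture never starts silently or instantly: the device must first emit an unmistakable bystander signal and wait long enough for people nearby to notice and object.

```python
import time

# Hypothetical sketch: capture never starts silently. The device interface
# (flash_indicator, play_tone, begin_recording) is invented for illustration.

RECORDING_NOTICE_DELAY_S = 2.0   # time for bystanders to notice before capture begins

def start_capture(device) -> None:
    # An unmistakable, hard-to-defeat signal stands in for the friction of
    # visibly pointing a phone at someone.
    device.flash_indicator(color="red", pattern="strobe")
    device.play_tone(frequency_hz=880, duration_s=0.5)
    time.sleep(RECORDING_NOTICE_DELAY_S)   # friction: capture is never instant
    device.begin_recording()
```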

Kavya Pearlman: [00:52:58.16] Oh, and I [00:53:00.00] totally remember being on that panel; bystanders were a big part of the discussion, and I remember a very strong recommendation from multiple stakeholders that you really have to be careful about this. But that's exactly what they did: ignored it entirely. Instead of listening to the recommendations, they put in a little tiny light; no red, no nothing, just a white light, which actually reads to a bystander [00:53:30.00] like a benevolent sort of signal. And in many cases you could put a piece of tape over it and it wouldn't be noticeable. So, yeah, it just goes to show that oftentimes these voices, when we try to reach out, are not heard, in which case we need some regulatory, some heavy hand that will say: hey, you can't do this, you can't surveil human beings on a daily basis.

Micaela Mantegna: [00:53:58.63] I'm sorry, but [00:54:00.00] that is also an ableist assumption, because having a light to say that I'm recording is not going to work for a lot of people. And one of the things I keep saying about the promises of these realities is that they are built for a certain kind of person: not only people who are able to see and listen, but also people who have access to the technology. This is going to be a huge gap. I come from Argentina and I'm based in Argentina; [00:54:30.00] access to assistive technology there is so difficult, and there is going to be a knowledge gap and an access gap. We don't want that, because the thing about smartphones and the internet we already have is that you can get to almost any site with almost any kind of device. But this is going to be really different in these immersive environments: the hardware that you have [00:55:00.00] access to is going to matter a lot, and there are people who are going to be left out. For me, it's kind of an internet of ableist assumptions, because we do have adaptive technologies to deal with this today, but how is that going to translate into the metaverse?

Kurt Opsahl: [00:55:22.57] All right. Well, we're almost out of time; we started a little bit late, so maybe we can go over for a moment. We had a couple of questions [00:55:30.00] come in; I'll put both of them out there so we can try to cover them at the same time. One was how we address metadata privacy in the metaverse, and the other was about Neuralink and brain-computer interfaces in VR.

Kavya Pearlman: [00:55:49.96] Both of them are very, very important. I can talk about addressing the data part, [00:56:00.00] because we are currently working on this with our Data Classification Working Group. It is a public working group: if you are a developer, a company, or a civic organization, you can be a part of it, to understand the context and the issues. Once we understand the context, then we can start to apply the rules. I think that's one of the necessary [00:56:30.00] steps.

Daniel Leufer: [00:56:36.99] I'll jump in, just because it brings up the question of brain-computer interfaces, and there have been some great discussions on this. You know, I think the idea that a headset could integrate a brain-computer interface and could be mining this incredibly deep [00:57:00.00] information about you is extremely worrying. But one thing that I would like to point out is that more data, deeper data, doesn't necessarily lead to better inference. It's not as simple as: you collect data, and then you can know everything about a person. There are a whole lot of assumptions, a whole lot of often really contested theories about how you construct, you know, models of human emotion and all of these things. And a lot of the theories being used [00:57:30.00] in Silicon Valley that are informing these technologies are very simplistic, and they actually have a reductive potential to limit the scope of our human experience. So I think it's really important not to assume that with deeper, more intimate data we can make better inferences about people. We can really get to a dangerous place where we're coming up with ideas and conceptions of what people are, and using those [00:58:00.00] to build worlds and to inform how people interact in them, which can really radically undermine very important things about us. There's a quote from Hannah Arendt, a favorite philosopher; to paraphrase it, the danger is not that reductive theories of human behavior are true, but that they could become true. And I think doing this badly really could do that.

Kavya Pearlman: [00:58:27.08] Especially when you talk about the Valley, one thing [00:58:30.00] that I'm seeing is that they are kind of pulling us back to the days when cigarettes were sold as entertainment; you know, it's just spin, whatever. Now we're being spun the metaverse, and then you combine it with artificial intelligence; this is some monster in the making. We'd better understand it, otherwise it can harm us. It took us [00:59:00.00] only, what, X amount of years to understand that cigarettes can kill.

Kurt Opsahl: [00:59:05.87] All right, well, we’re sorry, go ahead.

Micaela Mantegna: [00:59:10.40] Just a quick thought about mental inferences, to circle back to the potential for abuse by police: keep in mind what Clearview AI did with that kind of data.

Kurt Opsahl: [00:59:25.26] So thanks. Thanks, everybody, for coming, those watching on [00:59:30.00] our live stream and those here virtually in our space, and thanks to all of our panelists for joining us as we try to address these issues. XR, or the metaverse, all of this will become more prominent, and it's good to get in now to try to ask these questions, look at the answers, and find a future that we would want to live in: one that incorporates these technologies but doesn't turn them into a panopticon, doesn't turn into a surveillance state looking into [01:00:00.00] your mind and watching your every move. A future where you take advantage of XR and it doesn't take advantage of you; that's the future we want. So thank you, everybody, and that'll be a wrap.

Kavya Pearlman: [01:00:13.61] Thank you.

Micaela Mantegna: [01:00:15.11] Thanks, everyone