New research suggests that privacy in the metaverse might be impossible
A new paper from the University of California Berkeley reveals that privacy may be impossible in the metaverse without innovative new safeguards to protect users.
Led by graduate researcher Vivek Nair, the recently released study was conducted at the Center for Responsible Decentralized Intelligence (RDI) and involved the largest dataset of user interactions in virtual reality (VR) that has ever been analyzed for privacy risks.
What makes the results so surprising is how little data is actually needed to uniquely identify a user in the metaverse, potentially eliminating any chance of true anonymity in virtual worlds.
Simple motion data isn't so simple
As background, most researchers and policymakers who study metaverse privacy focus on the many cameras and microphones in modern VR headsets that capture detailed information about the user’s facial features, vocal qualities and eye motions, along with ambient information about the user’s home or office.
Some researchers even worry about emerging technologies like EEG sensors that can detect unique brain activity through the scalp. While these rich data streams pose serious privacy risks in the metaverse, turning them all off may not provide anonymity.
That’s because the most basic data stream needed to interact with a virtual world — simple motion data — may be all that’s required to uniquely identify a user within a large population.
And by “simple motion data,” I mean the three most basic data points tracked by virtual reality systems: one point on the user’s head and one on each hand. Researchers often refer to this as “telemetry data,” and it represents the minimal dataset required to allow a user to interact naturally in a virtual environment.
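To make the shape of this telemetry stream concrete, here is a minimal sketch in Python. The type and field names (`TelemetryFrame`, `head`, `left_hand`, `right_hand`) are illustrative, not from the study or any actual VR runtime; the point is simply that each sample is a timestamp plus three spatial points.

```python
from dataclasses import dataclass

@dataclass
class TelemetryFrame:
    """One sample of the minimal VR telemetry stream:
    head and two hand positions (x, y, z) plus a timestamp."""
    t: float                              # seconds since session start
    head: tuple[float, float, float]      # headset position in meters
    left_hand: tuple[float, float, float]
    right_hand: tuple[float, float, float]

# A recording is just a list of frames, sampled dozens of times per second.
recording = [
    TelemetryFrame(0.000, (0.0, 1.70, 0.0), (-0.3, 1.20, 0.20), (0.3, 1.20, 0.20)),
    TelemetryFrame(0.014, (0.0, 1.71, 0.0), (-0.3, 1.25, 0.25), (0.3, 1.15, 0.25)),
]
```

Sparse as it looks, a stream of frames like these is exactly the data the Berkeley researchers analyzed.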
Unique identification in seconds
This brings me to the new Berkeley study, “Unique Identification of 50,000-plus Virtual Reality Users from Head and Hand Motion Data.” The research analyzed more than 2.5 million VR data recordings (fully anonymized) from more than 50,000 players of the popular Beat Saber app and found that individual users could be uniquely identified with more than 94% accuracy using only 100 seconds of motion data.
Even more surprising was that half of all users could be uniquely identified with only 2 seconds of motion data. Achieving this level of accuracy required innovative AI techniques, but again, the data used was extremely sparse — just three spatial points for each user tracked over time.
In other words, any time a user puts on a mixed reality headset, grabs the two standard hand controllers and begins interacting in a virtual or augmented world, they are leaving behind a trail of digital fingerprints that can uniquely identify them. Of course, this raises the question: How do these digital fingerprints compare to actual real-world fingerprints in their ability to uniquely identify users?
If you ask people on the street, they’ll tell you that no two fingerprints in the world are the same. This may or may not be true, but honestly, it doesn’t matter. What’s important is how accurately you can identify an individual from a fingerprint that was left at a crime scene or input to a finger scanner. It turns out that fingerprints, whether lifted from a physical location or captured by the scanner on your phone, are not as uniquely identifiable as most people assume.
Let’s consider the act of pressing your finger to a scanner. According to the National Institute of Standards and Technology (NIST), the desired benchmark for fingerprint scanners is a unique match with an accuracy of 1 in 100,000 people.
That said, real-world testing by NIST and others has found that the true accuracy of most fingerprint devices may be closer to 1 in 1,500. Still, that makes it extremely unlikely that a criminal who steals your phone will be able to use their own finger to gain access.
On the other hand, the Berkeley study suggests that when a VR user swings a virtual saber at an object flying towards them, the motion data they leave behind may be more uniquely identifiable than their actual real-world fingerprint.
This poses a very serious privacy risk, as it potentially eliminates anonymity in the metaverse. In addition, this same motion data can be used to accurately infer a number of specific personal characteristics about users, including their height, handedness and gender.
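As a toy illustration of how traits leak from motion data (this is not the study’s method, and the 10 cm crown offset is a made-up constant), even crude summary statistics hint at height and handedness:

```python
def infer_traits(frames):
    """Toy trait inference from motion summaries.
    frames: list of (head_y_meters, left_hand_speed, right_hand_speed) tuples."""
    avg_head_y = sum(f[0] for f in frames) / len(frames)
    left_total = sum(f[1] for f in frames)
    right_total = sum(f[2] for f in frames)
    return {
        # Standing headset height tracks closely with body height;
        # the +0.10 m "crown offset" here is an illustrative guess.
        "approx_height_m": round(avg_head_y + 0.10, 2),
        # The dominant hand typically moves faster and farther.
        "handedness": "right" if right_total > left_total else "left",
    }
```

A real attacker would use far more sophisticated models, but the sketch shows why even “anonymized” telemetry is revealing: the traits are baked into the geometry of the motion itself.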
And when combined with other data commonly tracked in virtual and augmented environments, this motion-based fingerprinting method is likely to yield even more accurate identifications.
I asked Nair to comment on my comparison above between traditional fingerprint accuracy and the use of motion data as “digital fingerprints” in virtual and augmented environments.
He described the danger this way: “Moving around in a virtual world while streaming basic motion data would be like browsing the internet while sharing your fingerprints with every website you visit. However, unlike web-browsing, which does not require anyone to share their fingerprints, the streaming of motion data is a fundamental part of how the metaverse currently works.”
To give you a sense of how insidious motion-based fingerprinting could be, consider the metaverse of the near future: A time when users routinely go shopping in virtual and augmented worlds. Whether browsing products in a virtual store or visualizing how new furniture might look in their real apartment using mixed reality eyewear, users are likely to perform common physical motions such as grabbing virtual objects off virtual shelves or taking a few steps back to get a good look at a piece of virtual furniture.
The Berkeley study suggests that these common motions could be as unique to each of us as fingerprints. If that’s the case, these “motion prints,” as we might call them, would mean that casual shoppers couldn’t visit a virtual store without being uniquely identifiable.
So, how do we solve this inherent privacy problem?
One approach is to obscure the motion data before it is streamed from the user’s hardware to any external servers, typically by adding noise. This could protect the privacy of users, but it would also reduce the precision of dexterous physical motions, thereby compromising user performance in Beat Saber or any other application requiring physical skill. For many users, the tradeoff may not be worth it.
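A minimal sketch of that noise-based defense, assuming simple Gaussian jitter applied on-device to each tracked coordinate (the function name and the sigma value are illustrative, not from the Berkeley work):

```python
import random

def add_noise(point, sigma=0.02):
    """Jitter one tracked (x, y, z) point with Gaussian noise before it
    leaves the device. sigma is in meters: larger values mean more
    privacy but less tracking precision."""
    return tuple(v + random.gauss(0.0, sigma) for v in point)

clean_head = (0.0, 1.70, 0.0)
noisy_head = add_noise(clean_head)
```

The privacy/precision tension is visible right in the parameter: a sigma large enough to blur a user’s identifying motion signature may also be large enough to make them miss a block in Beat Saber.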
An alternate approach is to enact sensible regulation that would prevent metaverse platforms from storing and analyzing human motion data over time. Such regulation would help protect the public, but it would be difficult to enforce and could face pushback from the industry.
For these reasons, researchers at Berkeley are exploring sophisticated defensive techniques that they hope will obscure the unique characteristics of physical motions without degrading dexterity in virtual and augmented worlds.
As an outspoken advocate for consumer protections in the metaverse, I strongly encourage the field to explore all approaches in parallel, including both technical and policy solutions.
Protecting personal privacy is not just important for users, it’s important for the industry at large. After all, if users don’t feel safe in the metaverse, they may be reluctant to make virtual and augmented environments a significant part of their digital lives.
Dr. Louis Rosenberg is CEO of Unanimous AI, chief scientist of the Responsible Metaverse Alliance and global technology advisor to XRSI. Rosenberg is an advisor to the team that conducted the Berkeley study above.