Biometric Authentication an Achilles' Heel for Metaverse Security


Trend Micro Incorporated today released a new report warning that exposed biometric data creates a serious authentication risk across a wide range of digital scenarios, including the metaverse.

William Malik, vice president of infrastructure strategies at Trend Micro, said: “The use of biometrics is championed by some as a more secure, easier to use alternative for passwords. However, unlike passwords, our features can’t be easily changed. So a compromise could have a long-lasting impact on users. Hijacking a user’s metaverse profile in the future could be similar to gaining complete access to their PC today.”

Trend Micro defines the metaverse as “a cloud distributed, multi-vendor, immersive-interactive operating environment that users can access through different categories of connected devices.”

As such, those able to impersonate individuals inside this new iteration of the web could gain access to everything from online banking accounts and cryptocurrency stores to highly sensitive corporate data.

As outlined in the report, threat actors in the future may be able to use stolen or leaked biometric data to trick connected devices, such as VR/AR headsets, into logging them in as someone else. That could open the door to data theft, fraud, extortion, and much more.

Metaverse user profiles may also be an attractive target as a valuable source of additional biometric data, such as detailed 3D user models that mimic a person’s real-life biometric features.

In this new computing environment, for example, two of the three factors typically used for authentication (something you know, something you have, and something you are) will be registered with the software maintaining the metaverse.

Trend Micro’s report is intended to generate more dialogue in the IT and security community about how to head off such potential risks. It warns that huge volumes of biometric data, including face, voice, iris, palm, and fingerprint patterns, are already being exposed online in high enough quality to trick authentication systems.

This data can be found in images and audio content posted on social media and messaging platforms, news media sites, and government portals that people use every day.

As well as helping threat actors bypass authentication checks, leaked or stolen biometric data could also be used to create deepfake models en masse, the report warns.
