SonicID: User Identification on Smart Glasses with Acoustic Sensing

Brad Magnetta
Reviews
October 28, 2024

If you want to read about this subject in more depth, you can refer to the full article available at the following URL. It provides additional insights and practical examples to help you better understand and apply the concepts discussed.

TLDR

In this blog post, we'll be diving into SonicID, a groundbreaking user authentication system for smart glasses developed by researchers at Cornell University. SonicID uses ultrasonic waves to scan a user's face and extract unique biometric information, making it a low-power and minimally-obtrusive solution for user authentication. We'll explore the technology behind SonicID, its implications for the wearable tech industry, and how it compares to other authentication methods. We'll also provide a step-by-step guide on how to implement similar technologies in your own projects.

Introduction to SonicID

SonicID is a revolutionary user authentication system that uses ultrasonic waves to scan a user's face and extract unique biometric information. This technology, developed by researchers at Cornell University, is designed for smart glasses and uses a binary classifier with the ResNet-18 architecture to distinguish between different users. SonicID can authenticate a user in just 0.06 seconds, making it an efficient solution for wearable technology.

The system uses the shape of the user's face, scanned with acoustic signals, as biometric information. This is a significant departure from traditional authentication methods, which often require user interaction and can be intrusive. SonicID, on the other hand, is a low-power solution that maintains minimal intrusion, making it an ideal choice for wearable technology like smart glasses.

Pseudocode: Basic Authentication with ResNet-18 Model

import torch
import torchvision.models as models

# Load a ResNet-18 model pre-trained on ImageNet as a starting point
model = models.resnet18(pretrained=True)

# Replace the final layer for binary classification (target user vs. non-user)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

# Placeholder preprocessing step: a real pipeline would turn the captured
# acoustic signals into an image-like echo profile tensor
def preprocess(acoustic_data):
    return torch.randn(1, 3, 224, 224)

# Function for user authentication
def authenticate_user(acoustic_data):
    # Process acoustic data into an input tensor
    input_tensor = preprocess(acoustic_data)
    with torch.no_grad():
        output = model(input_tensor)
    # Class index 1 is treated as the authenticated user
    return torch.argmax(output, dim=1).item() == 1


The Development of SonicID

The development of SonicID began with the recognition of the need for a more efficient and less intrusive user authentication method for wearable technology. The researchers at Cornell University saw the potential of using ultrasonic waves to scan a user's face and extract unique biometric information. This led to the creation of SonicID, a system that uses two speakers to emit encoded signals towards the user's face. The reflected signals are captured by two microphones, allowing the identification of unique acoustic features specific to the user.

The SonicID system was tested across multiple sessions and days, with the training data augmented with random noise until it reached at least twice its original size. The system performed well across different remounting sessions on the same day, achieving a true positive rate (TPR) of 97.2% and a false positive rate (FPR) of 4.6%.

Pseudocode for Emitting and Capturing Signals

import time
import random

# Function to simulate the emission and capture of ultrasonic signals
def emit_and_capture_signals(): 
    # Emit encoded ultrasonic signal 
    signal = random.choice(["encoded_signal_A", "encoded_signal_B"]) 
    print("Emitting:", signal) 
    time.sleep(0.06)  # wait 0.06 seconds 
    # Capture reflected signal 
    captured_signal = f"reflection_of_{signal}" 
    print("Captured:", captured_signal) 
    return captured_signal

# Run signal emission and capture
captured_data = emit_and_capture_signals()
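
As noted above, the dataset was augmented with random noise to at least twice its original size, but the post doesn't detail that step. Below is a minimal sketch of one way to do noise-based augmentation, assuming additive Gaussian noise on each echo profile; the function name augment_with_noise and the noise_std parameter are illustrative, not taken from the SonicID paper.

Pseudocode for Noise-Based Data Augmentation

import numpy as np

# Add Gaussian noise to each sample to expand the dataset
def augment_with_noise(echo_profiles, copies_per_sample=1, noise_std=0.01):
    augmented = [echo_profiles]
    for _ in range(copies_per_sample):
        noise = np.random.normal(0.0, noise_std, size=echo_profiles.shape)
        augmented.append(echo_profiles + noise)
    # With copies_per_sample >= 1 the result is at least twice the original size
    return np.concatenate(augmented, axis=0)

# Example: doubling a mock dataset of 100 echo profiles
profiles = np.random.rand(100, 3, 224, 224)
augmented_profiles = augment_with_noise(profiles)
print(augmented_profiles.shape)  # (200, 3, 224, 224)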

Implications of SonicID

The development of SonicID has significant implications for the field of wearable technology. Its low-power and minimally-obtrusive nature make it an ideal solution for user authentication on smart glasses. Moreover, SonicID's use of ultrasonic waves to extract unique biometric information could pave the way for other similar technologies in the future.

However, there are potential challenges and limitations to consider. For instance, the system's performance may vary due to factors such as the user's hair obstructing the device. Additionally, the system's performance decreased when only one side of sensors was used, indicating a potential limitation in its design.

Pseudocode: Performance Monitoring

# Function to evaluate system performance
def evaluate_performance(true_positives, false_negatives, false_positives, true_negatives):
    # TPR: genuine attempts accepted / all genuine attempts
    tpr = true_positives / (true_positives + false_negatives) * 100
    # FPR: impostor attempts accepted / all impostor attempts
    fpr = false_positives / (false_positives + true_negatives) * 100
    return tpr, fpr

Technical Analysis of SonicID

SonicID uses a binary classifier with the ResNet-18 architecture to distinguish between different users. This deep learning model is known for its ability to effectively classify images, making it a suitable choice for SonicID's purpose of identifying unique acoustic features.

The system uses two speakers to emit encoded signals towards the user's face. These signals are reflected back and captured by two microphones, allowing the system to identify unique acoustic features specific to the user. This process generates an Echo Profile that encapsulates the user's biometric information.
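
The post doesn't show how the Echo Profile itself is computed. A common approach in acoustic sensing work, and the assumption behind the sketch below, is to cross-correlate each microphone recording with the known transmitted signal, frame by frame; the names compute_echo_profile and frame_length are illustrative, not details from the paper.

Pseudocode for Computing an Echo Profile

import numpy as np
from scipy.signal import correlate

def compute_echo_profile(received, transmitted, frame_length):
    # Cross-correlate each frame of the recording with the transmitted signal;
    # peaks in the correlation correspond to reflections at different distances
    num_frames = len(received) // frame_length
    profile = []
    for i in range(num_frames):
        frame = received[i * frame_length:(i + 1) * frame_length]
        profile.append(correlate(frame, transmitted, mode="same"))
    return np.stack(profile)  # shape: (num_frames, frame_length)

# Example with mock signals
transmitted = np.random.randn(256)        # one encoded ultrasonic frame
received = np.random.randn(256 * 50)      # 50 frames of microphone samples
echo_profile = compute_echo_profile(received, transmitted, frame_length=256)
print(echo_profile.shape)  # (50, 256)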

Pseudocode for ResNet-18 User Authentication Model

import torch
import torchvision.models as models
import torch.nn as nn

# Define a modified ResNet-18 model for binary classification
class SonicIDModel(nn.Module): 
    def __init__(self): 
        super(SonicIDModel, self).__init__() 
        self.base_model = models.resnet18(pretrained=True) 
        # Modify the final layer for binary classification (user vs. non-user) 
        self.base_model.fc = nn.Linear(self.base_model.fc.in_features, 2) 

    def forward(self, x): 
        return self.base_model(x)

# Instantiate the model
model = SonicIDModel()

# Example of running a forward pass (assuming the echo profile has been processed into a tensor)
model.eval()
input_signal = torch.randn(1, 3, 224, 224)  # Mock input
with torch.no_grad():
    output = model(input_signal)
print("Output logits:", output)

Applying SonicID in Your Own Projects

To implement a similar technology in your own projects, you'll first need to understand the basics of the ResNet-18 architecture and how it can be used for binary classification. You'll also need to familiarize yourself with the use of ultrasonic waves for biometric identification.

The next step is to set up the hardware. SonicID uses two speakers to emit encoded signals and two microphones to capture the reflected signals. You'll need to ensure that these components are correctly set up and calibrated.
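
The post doesn't cover how to drive the speakers and microphones in software. The sketch below shows one possible setup using the sounddevice library to play a near-ultrasonic chirp and record the reflections at the same time; the library choice, sample rate, and frequency range are assumptions for illustration, not details from SonicID.

Pseudocode for Emitting a Chirp and Recording Reflections

import numpy as np
import sounddevice as sd
from scipy.signal import chirp

SAMPLE_RATE = 96000  # high sample rate for near-ultrasonic frequencies (assumed value)
DURATION = 0.012     # length of one transmitted chirp in seconds (assumed value)

# Generate a short linear chirp sweeping through near-ultrasonic frequencies
t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
tx_signal = chirp(t, f0=18000, f1=21000, t1=DURATION, method="linear").astype(np.float32)

# Play the chirp through the speaker while recording on two microphones simultaneously
recording = sd.playrec(tx_signal, samplerate=SAMPLE_RATE, channels=2)
sd.wait()  # block until playback and recording finish
print("Captured samples:", recording.shape)  # (num_samples, 2)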

Finally, you'll need to train your model using a dataset of acoustic features. This will likely involve multiple sessions and may require data augmentation to account for random noise.

Pseudocode for Data Collection and Model Training

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

num_sessions = 4      # number of remounting sessions per user
num_epochs = 10
user_list = [0, 1]    # binary labels: 1 = target user, 0 = other users

collected_data = []
labels = []

# Collect data over multiple sessions to capture variation in the acoustic features.
# In a real system each sample would be an echo profile built from the captured
# reflections; here a random tensor stands in for that preprocessing step.
for session in range(num_sessions):
    for user in user_list:
        raw_signal = emit_and_capture_signals()   # capture reflections (see earlier sketch)
        echo_profile = torch.randn(3, 224, 224)   # stand-in for preprocessing raw_signal
        collected_data.append(echo_profile)
        labels.append(user)

# Stack collected samples into tensors and create a DataLoader
data_tensor = torch.stack(collected_data)
labels_tensor = torch.tensor(labels)
dataset = TensorDataset(data_tensor, labels_tensor)
dataloader = DataLoader(dataset, batch_size=16, shuffle=True)

# Training loop
model.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
criterion = nn.CrossEntropyLoss()
for epoch in range(num_epochs):
    for inputs, targets in dataloader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
    print(f"Epoch {epoch+1}/{num_epochs}, Loss: {loss.item()}")

Conclusion

SonicID represents a significant advancement in the field of user authentication for wearable technology. Its use of ultrasonic waves to extract unique biometric information offers a low-power and minimally-obtrusive solution that could pave the way for future developments in this field. As we continue to explore the potential of wearable technology, systems like SonicID will undoubtedly play a crucial role.

FAQ

Q1: What is SonicID?

A1: SonicID is a user authentication system for smart glasses that uses ultrasonic waves to scan a user's face and extract unique biometric information.

Q2: How does SonicID work?

A2: SonicID uses two speakers to emit encoded signals towards the user's face. The reflected signals are captured by two microphones, allowing the identification of unique acoustic features specific to the user.

Q3: What makes SonicID different from other authentication methods?

A3: Unlike other authentication methods, SonicID does not require user interaction, maintains low power, and is minimally intrusive.

Q4: What are the potential challenges of SonicID?

A4: Potential challenges include the user's hair obstructing the device and a decrease in performance when only one side of sensors is used.

Q5: How can I implement a similar technology in my own projects?

A5: To implement a similar technology, you'll need to understand the ResNet-18 architecture, set up the necessary hardware, and train your model using a dataset of acoustic features.

Q6: What are the implications of SonicID for the future of wearable technology?

A6: SonicID's low-power and minimally-obtrusive nature make it an ideal solution for user authentication on smart glasses, potentially paving the way for other similar technologies in the future.

Try Modlee for free

Simplify ML development 
and scale with ease
