Hello XR Developers! Today, we’re taking a deep dive into creating your very own drawing app using the Logitech MX-Ink stylus with Meta Quest. If you’re familiar with apps like ShapesXR or PaintingVR, you’ll love this new way of sketching and painting inside your headset. In this tutorial, we’ll walk through how to set up the stylus, explore the sample assets from Logitech, and create a fully functional drawing app. Let’s get started!
Setting Up the Logitech MX-Ink Stylus
- Visit the Logitech GitHub Page:
- Head over to the Logitech MX-Ink Unity Integration Guide and download the MX Ink Unity Package.
- This package comes with all the sample code, stylus mapping, 3D models, and prefabs for your app.
- Download the Input Action Set:
- Starting with version 68 of the Meta SDK, Meta Quest supports the MX-Ink Stylus through an Input Action Set, which you can also download from the GitHub page.
- This set includes the new OpenXR Interaction Profile for the Logitech pen, so you don’t need to manually bind input actions.
- Check for Meta SDK Compatibility:
- Ensure that you have Meta Core SDK version 68.0.2 or newer installed in your Unity project. You can check and install this through the Package Manager.
- Add the Input Action Map by going to `Edit > Project Settings > Meta XR` and referencing your downloaded map under Input Actions.
Setting Up the Stylus in Unity
- Import Assets:
- Import the Unity Package from the GitHub page into your project.
- Add the stylus model to your scene—no need to parent it to any controllers. The model will track the stylus input on its own.
- Create the StylusHandler Script:
- This script will handle stylus inputs like pressure sensitivity, buttons for grabbing and deleting, and haptic feedback.
- Define variables for controlling haptic feedback, including duration, amplitude, and a threshold to trigger the vibration when enough pressure is applied.
- Declare events like `OnFrontPressed`, `OnBackReleased`, and `OnDocked` to trigger interactions based on the stylus state.
- Stylus Input and Pose Handling:
- Use `UpdateStylusPose` to determine if the stylus is active and in which hand, updating its position and rotation accordingly.
- Manage stylus pressure and button states using methods like `GetActionStateFloat` and `GetActionStateBoolean` (a minimal StylusHandler sketch follows this list).
- Generate Haptic Feedback:
- If the stylus’s pressure exceeds the threshold, generate haptic feedback through the OVRPlugin to vibrate the hand holding the stylus.
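Putting these pieces together, a minimal `StylusHandler` might look like the sketch below. The action names, field names, and the `OVRPlugin` call signatures are assumptions based on the steps above rather than Logitech's exact sample code; haptics use `OVRInput.SetControllerVibration` as a simpler stand-in for the sample's OVRPlugin vibration call, and pose tracking is left to the stylus prefab.

```csharp
using System;
using UnityEngine;

// Minimal sketch of a StylusHandler: poll the MX Ink actions through OVRPlugin,
// raise C# events on button transitions, and vibrate the holding hand when the
// tip pressure crosses a threshold. The action names and the OVRPlugin call
// signatures are assumptions; use the names from the downloaded Input Action Set.
public class StylusHandler : MonoBehaviour
{
    [Header("Haptics")]
    [SerializeField] private float hapticDuration = 0.05f;   // used by OVRPlugin vibration actions; unused with the OVRInput fallback below
    [SerializeField] private float hapticAmplitude = 0.6f;
    [SerializeField] private float pressureThreshold = 0.1f;

    [Header("Assumed action names (replace with the ones from the Action Set)")]
    [SerializeField] private string tipAction = "tip";
    [SerializeField] private string frontAction = "front";
    [SerializeField] private string backAction = "back";
    [SerializeField] private string dockAction = "dock";

    public event Action OnFrontPressed;
    public event Action OnBackReleased;
    public event Action OnDocked;

    public float TipPressure { get; private set; }
    public bool IsRightHanded = true;

    private bool _frontWasPressed;
    private bool _backWasPressed;
    private bool _wasDocked;

    private void Update()
    {
        // Analog tip pressure (0..1).
        OVRPlugin.GetActionStateFloat(tipAction, out float pressure);
        TipPressure = pressure;

        // Digital buttons and the docked state.
        OVRPlugin.GetActionStateBoolean(frontAction, out bool frontPressed);
        OVRPlugin.GetActionStateBoolean(backAction, out bool backPressed);
        OVRPlugin.GetActionStateBoolean(dockAction, out bool docked);

        // Raise events only on state transitions.
        if (frontPressed && !_frontWasPressed) OnFrontPressed?.Invoke();
        if (!backPressed && _backWasPressed) OnBackReleased?.Invoke();
        if (docked && !_wasDocked) OnDocked?.Invoke();

        _frontWasPressed = frontPressed;
        _backWasPressed = backPressed;
        _wasDocked = docked;

        // Vibrate while the tip is pressed hard enough. OVRInput.SetControllerVibration
        // is a simpler stand-in for the sample's OVRPlugin vibration call.
        var controller = IsRightHanded ? OVRInput.Controller.RTouch : OVRInput.Controller.LTouch;
        if (TipPressure > pressureThreshold)
            OVRInput.SetControllerVibration(1.0f, hapticAmplitude, controller);
        else
            OVRInput.SetControllerVibration(0f, 0f, controller);

        // Pose handling (the sample's UpdateStylusPose) is omitted: the stylus
        // prefab from the Unity package already tracks position and rotation.
    }
}
```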
Building the Drawing Logic
- Create the Drawing Script:
- This script controls the drawing and manipulation of lines using Unity’s `LineRenderer` component.
- Define variables to store all drawn lines, manage the width of lines based on stylus pressure, and set colors for drawing and highlighting lines.
- Start Drawing:
- When the user presses the stylus tip, the app enters `Drawing` mode, creating a new line using the `LineRenderer`.
- As the stylus moves, the script adds points to the line and adjusts its width dynamically based on the pressure applied.
- Highlight and Move Lines:
- Use proximity detection to highlight lines when the stylus is close. Change the line’s color to `highlightColor` when it is near enough.
- Once a line is highlighted, users can grab and move it in 3D space by adjusting the points in the `LineRenderer` (see the Drawing script sketch after this list).
- Delete Lines:
- Pressing the back button on the stylus will delete the currently highlighted line from the scene.
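Here is a minimal sketch of how the `Drawing` script could be wired to the `StylusHandler` sketch above. Names such as `stylusTip`, `drawColor`, and `highlightColor` and the width/distance values are illustrative, not the sample's exact API; grabbing and moving a highlighted line with the front button is omitted to keep the sketch short.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal sketch of the Drawing logic described above, built against the
// StylusHandler sketch from earlier in this article. Grabbing and moving a
// highlighted line (offsetting its points each frame) is omitted for brevity.
public class Drawing : MonoBehaviour
{
    [SerializeField] private StylusHandler stylus;
    [SerializeField] private Transform stylusTip;       // transform at the stylus tip used to sample points
    [SerializeField] private Material lineMaterial;
    [SerializeField] private Color drawColor = Color.white;
    [SerializeField] private Color highlightColor = Color.yellow;
    [SerializeField] private float maxWidth = 0.01f;
    [SerializeField] private float minPointDistance = 0.003f;
    [SerializeField] private float highlightDistance = 0.03f;

    private readonly List<LineRenderer> _lines = new List<LineRenderer>();
    private LineRenderer _currentLine;
    private LineRenderer _highlightedLine;

    private void OnEnable() => stylus.OnBackReleased += DeleteHighlightedLine;   // back button deletes
    private void OnDisable() => stylus.OnBackReleased -= DeleteHighlightedLine;

    private void Update()
    {
        if (stylus.TipPressure > 0f)
        {
            if (_currentLine == null) StartLine();
            AddPoint(stylusTip.position, stylus.TipPressure);
        }
        else
        {
            _currentLine = null;          // stroke finished
            UpdateHighlight();
        }
    }

    private void StartLine()
    {
        _currentLine = new GameObject("Line").AddComponent<LineRenderer>();
        _currentLine.material = lineMaterial;
        _currentLine.useWorldSpace = true;
        _currentLine.positionCount = 0;
        _currentLine.startColor = _currentLine.endColor = drawColor;
        _lines.Add(_currentLine);
    }

    private void AddPoint(Vector3 position, float pressure)
    {
        int count = _currentLine.positionCount;
        if (count > 0 && Vector3.Distance(_currentLine.GetPosition(count - 1), position) < minPointDistance)
            return;                       // skip points that are too close together

        _currentLine.positionCount = count + 1;
        _currentLine.SetPosition(count, position);
        // Stylus pressure drives the stroke width (one width per line in this sketch).
        _currentLine.startWidth = _currentLine.endWidth = Mathf.Lerp(0.001f, maxWidth, pressure);
    }

    private void UpdateHighlight()
    {
        _highlightedLine = null;
        foreach (var line in _lines)
            for (int i = 0; i < line.positionCount; i++)
                if (Vector3.Distance(line.GetPosition(i), stylusTip.position) < highlightDistance)
                {
                    _highlightedLine = line;
                    break;
                }

        // Recolor everything so only the highlighted line stands out.
        foreach (var line in _lines)
            line.startColor = line.endColor = (line == _highlightedLine) ? highlightColor : drawColor;
    }

    private void DeleteHighlightedLine()
    {
        if (_highlightedLine == null) return;
        _lines.Remove(_highlightedLine);
        Destroy(_highlightedLine.gameObject);
        _highlightedLine = null;
    }
}
```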
Setting Up the Scene
- Attach Scripts:
- Add the `StylusHandler` and `Drawing` scripts to the stylus object in your scene.
- Set your desired colors, thresholds, and event handlers for a fully interactive experience.
- Pairing the Stylus with Meta Quest:
- Open the Meta Horizon App on your phone, go to your headset settings, and select Controllers.
- Click on “Pair New Controller,” choose “Pair Stylus,” and press the menu and back buttons on the stylus for 4 seconds to pair it with your headset.
Testing Your Drawing App
- Try Drawing:
- Once paired, test drawing by pressing the stylus tip or middle button. You’ll see the line width adjust according to the pressure applied.
- Use the front button to grab and move drawings, and the back button to delete them.
- Toggle between VR and passthrough mode using the stylus buttons, creating a seamless experience; a minimal toggle sketch follows.
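The passthrough toggle itself isn't covered by the steps above, but a minimal version could look like the sketch below, assuming passthrough support is enabled in `OVRManager`, the camera clear color is transparent, and an `OVRPassthroughLayer` (underlay) exists in the scene; hook it to whichever stylus event your app reserves for the toggle.

```csharp
using UnityEngine;

// Minimal sketch: toggle passthrough from a stylus event. Assumes passthrough
// support is enabled in OVRManager, the camera clear color is transparent,
// and an OVRPassthroughLayer (underlay) exists in the scene.
public class PassthroughToggle : MonoBehaviour
{
    [SerializeField] private StylusHandler stylus;
    [SerializeField] private OVRPassthroughLayer passthroughLayer;

    // Hook whichever stylus event your app reserves for the toggle.
    private void OnEnable() => stylus.OnDocked += Toggle;
    private void OnDisable() => stylus.OnDocked -= Toggle;

    private void Toggle()
    {
        // Enabling/disabling the layer switches between VR and passthrough rendering.
        passthroughLayer.enabled = !passthroughLayer.enabled;
    }
}
```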
Support Black Whale🐋
Thank you for following this article. Stay tuned for more in-depth and insightful content in the realm of XR development! 🌐
Did this article help you out? Consider supporting me on Patreon, where you can find all the source code, or simply subscribe to my YouTube channel!
Hello XR Developers! In today’s tutorial, we’re diving into how to integrate Meta’s Voice SDK with Large Language Models (LLMs) like Meta’s Llama, using Amazon Bedrock. This setup will allow you to send voice commands, process them through LLMs, and receive spoken responses, creating natural conversations with NPCs or assistants in your Unity game.
Let’s break down the process step-by-step!
Setting Up Amazon Bedrock
- Accessing Amazon Bedrock:
- Visit Amazon Bedrock and sign in to your console. If you don’t have an account, register a new one.
- Navigate to your account name and select Security Credentials. Here, create an Access Key and a Secret Access Key under “Access Key”.
- Request Model Access:
- In your Bedrock console, go to Bedrock Configurations > Model Access.
- Select Meta’s Llama 3 (or any other model) and click on “Request model access”.
- Depending on your region, you might need to select the closest server that hosts your desired model, e.g., London for Llama 3 or Oregon for Llama 3.1.
- Check Pricing:
- Be aware of the pricing for different models by visiting Amazon Bedrock Pricing.
Setting Up Unity for LLM Integration
- Install NuGet for Unity:
- Download the latest NuGet for Unity package from GitHub.
- Drag the downloaded package into your Unity project. This adds a “NuGet” menu to your Unity Editor.
- Install Required Packages:
- If using Unity 2022.3.20 or later, ensure that `Newtonsoft.Json` is installed via the Unity Package Manager.
- Install the Amazon Bedrock Runtime package via the NuGet package manager.
- Create Amazon Bedrock Connection Script:
- Create a new script called `AmazonBedrockConnection`.
- Utilize the Amazon and Amazon Bedrock namespaces to manage AWS credentials and interact with Bedrock (a minimal client-setup sketch follows this list).
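As a rough starting point, the top of such a script might look like the skeleton below; the region (`EUWest2` for London) is an assumption you should match to where your model access was granted, and pasting raw keys into the inspector is for prototyping only.

```csharp
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.Runtime;
using UnityEngine;

// Skeleton of the AmazonBedrockConnection script: build a Bedrock Runtime
// client from keys entered in the inspector. Keep real keys out of shipped builds.
public class AmazonBedrockConnection : MonoBehaviour
{
    [SerializeField] private string accessKeyId;
    [SerializeField] private string secretAccessKey;

    private AmazonBedrockRuntimeClient _client;

    private void Awake()
    {
        var credentials = new BasicAWSCredentials(accessKeyId, secretAccessKey);
        // Use the region where your model access was granted (e.g. eu-west-2 / London for Llama 3).
        _client = new AmazonBedrockRuntimeClient(credentials, RegionEndpoint.EUWest2);
    }
}
```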
Developing the Interaction Logic
- Set Up AWS Credentials:
- In your `AmazonBedrockConnection` class, define fields for `accessKeyId` and `secretAccessKey`.
- Set these values in Unity’s inspector for easy access and management.
- Build the UI:
- Create two text fields for displaying the prompt and response, an input field for typing prompts, and a button to send the prompt.
- Reference Meta’s Voice SDK text-to-speech (TTS) component to speak the AI’s response.
- Integrate AI Prompt Logic:
- Create a `SendPrompt` method to handle the core functionality:
- Update the UI with the user’s input.
- Format the input for the AI model and send it using `InvokeModelAsync`.
- Process the AI’s response and display it in the UI.
- Use the TTS component to vocalize the response (see the SendPrompt sketch after this list).
- Configure Unity Scene:
- Add the `AmazonBedrockConnection` script to an empty GameObject.
- Enter your AWS credentials from the console into the inspector fields.
- Set up the UI elements (text fields, input field, button).
- Add a TTS Speaker by navigating to `Assets > Create > Voice SDK > TTS > Add TTS Speaker to scene`.
- Customize the speaker’s voice if desired.
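Putting it all together, a `SendPrompt` implementation might look like the sketch below, which expands the skeleton from the previous section. The TextMeshPro UI types, the Llama prompt template, the model ID, and the `TTSSpeaker` namespace are assumptions to adjust for your own project and SDK versions.

```csharp
using System.IO;
using System.Text;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;
using Amazon.Runtime;
using Meta.WitAi.TTS.Utilities;   // TTSSpeaker (namespace may differ by Voice SDK version)
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using TMPro;
using UnityEngine;
using UnityEngine.UI;

// Expanded sketch of AmazonBedrockConnection: sends a prompt to a Llama model
// on Bedrock and speaks the response. Check the model ID, region, and the
// request/response JSON shapes against the Bedrock model card.
public class AmazonBedrockConnection : MonoBehaviour
{
    [SerializeField] private string accessKeyId;
    [SerializeField] private string secretAccessKey;
    [SerializeField] private string modelId = "meta.llama3-70b-instruct-v1:0"; // use the model you enabled

    [Header("UI")]
    [SerializeField] private TMP_Text promptText;
    [SerializeField] private TMP_Text responseText;
    [SerializeField] private TMP_InputField promptInput;
    [SerializeField] private Button sendButton;

    [Header("Voice")]
    [SerializeField] private TTSSpeaker ttsSpeaker;

    private AmazonBedrockRuntimeClient _client;

    private void Awake()
    {
        var credentials = new BasicAWSCredentials(accessKeyId, secretAccessKey);
        _client = new AmazonBedrockRuntimeClient(credentials, RegionEndpoint.EUWest2);
        sendButton.onClick.AddListener(() => SendPrompt(promptInput.text));
    }

    public async void SendPrompt(string prompt)
    {
        if (string.IsNullOrWhiteSpace(prompt)) return;
        promptText.text = prompt;

        // Llama-style instruct template; check the Bedrock model card for the exact format.
        string formattedPrompt =
            "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n" +
            prompt + "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n";

        string body = JsonConvert.SerializeObject(new
        {
            prompt = formattedPrompt,
            max_gen_len = 512,
            temperature = 0.5,
            top_p = 0.9
        });

        var request = new InvokeModelRequest
        {
            ModelId = modelId,
            ContentType = "application/json",
            Accept = "application/json",
            Body = new MemoryStream(Encoding.UTF8.GetBytes(body))
        };

        var response = await _client.InvokeModelAsync(request);

        // Llama models on Bedrock return their text under "generation".
        using var reader = new StreamReader(response.Body);
        string answer = (string)JObject.Parse(reader.ReadToEnd())["generation"];

        responseText.text = answer;
        ttsSpeaker.Speak(answer);   // Voice SDK text-to-speech
    }
}
```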
Combining with Wake Word Detection
- Enhance the Voice Manager:
- Open your Voice Manager script from the previous tutorial.
- Add a reference to the `AmazonBedrockConnection` script.
- After receiving the full transcription, pass it as a prompt to the `SendPrompt` method (see the sketch after this list).
- Test the Integration:
- Reference the `AmazonBedrockConnection` script in your Voice Manager within Unity.
- Test the entire workflow: use a wake word, speak a command, and receive a response from the AI model.
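The hand-off can be as simple as forwarding the Voice SDK's full transcription to `SendPrompt`. The sketch below assumes an `AppVoiceExperience` in the scene and the `AmazonBedrockConnection` sketch from earlier; your own Voice Manager from the previous tutorial may already expose a similar hook after the wake word fires.

```csharp
using Oculus.Voice;   // AppVoiceExperience (namespace may differ by Voice SDK version)
using UnityEngine;

// Sketch: forward the Voice SDK's full transcription to the Bedrock connection.
// Assumes an AppVoiceExperience in the scene and the AmazonBedrockConnection
// sketch from earlier in this article.
public class VoiceToBedrock : MonoBehaviour
{
    [SerializeField] private AppVoiceExperience appVoiceExperience;
    [SerializeField] private AmazonBedrockConnection bedrockConnection;

    private void OnEnable() =>
        appVoiceExperience.VoiceEvents.OnFullTranscription.AddListener(OnTranscription);

    private void OnDisable() =>
        appVoiceExperience.VoiceEvents.OnFullTranscription.RemoveListener(OnTranscription);

    private void OnTranscription(string transcription)
    {
        // The transcribed voice command becomes the LLM prompt.
        bedrockConnection.SendPrompt(transcription);
    }
}
```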
Final Thoughts
Congratulations! You’ve successfully set up a system where you can speak commands, process them through an LLM like Meta’s Llama, and have the response spoken back to you. This powerful combination opens up exciting possibilities for creating interactive experiences in your Unity projects.
Support Black Whale🐋
Thank you for following this article. Stay tuned for more in-depth and insightful content in the realm of XR development! 🌐
Did this article help you out? Consider supporting me on Patreon, where you can find all the source code, or simply subscribe to my YouTube channel!