Create an XR Drawing App using Logitech’s MX Ink Stylus

  • 21 September 2024
  • 7 min read

Hello XR Developers! Today, we’re taking a deep dive into creating your very own drawing app using the Logitech MX Ink stylus with Meta Quest. If you’re familiar with apps like ShapesXR or PaintingVR, you’ll love this new way of sketching and painting inside your headset. In this tutorial, we’ll walk through how to set up the stylus, explore the sample assets from Logitech, and build a fully functional drawing app. Let’s get started!

Setting Up the Logitech MX-Ink Stylus

  1. Visit the Logitech GitHub Page:
    • Head over to the Logitech MX Ink Unity Integration Guide and download the MX Ink Unity Package.
    • This package comes with all the sample code, stylus mapping, 3D models, and prefabs for your app.
  2. Download the Input Action Set:
    • Starting with version 68 of the Meta Quest software, the MX Ink stylus is supported through an Input Action Set, which you can also download from the GitHub page.
    • This set includes the new OpenXR interaction profile for the Logitech pen, so you don’t need to bind input actions manually.
  3. Check for Meta SDK Compatibility:
    • Ensure that you have Meta Core SDK version 68.0.2 or newer installed in your Unity project. You can check and install this through the Package Manager.
    • Add the Input Action Map by going to Edit > Project Settings > Meta XR and referencing your downloaded map under Input Actions.

Setting Up the Stylus in Unity

  1. Import Assets:
    • Import the Unity Package from the GitHub page into your project.
    • Add the stylus model to your scene—no need to parent it to any controllers. The model will track the stylus input on its own.
  2. Create the StylusHandler Script:
    • This script will handle stylus inputs like pressure sensitivity, buttons for grabbing and deleting, and haptic feedback.
    • Define variables for controlling haptic feedback, including duration, amplitude, and a threshold to trigger the vibration when enough pressure is applied.
    • Declare events like OnFrontPressed, OnBackReleased, and OnDocked to trigger interactions based on the stylus state.
  3. Stylus Input and Pose Handling:
    • Use UpdateStylusPose to determine if the stylus is active and in which hand, updating its position and rotation accordingly.
    • Manage stylus pressure and button states using methods like GetActionStateFloat and GetActionStateBoolean.
  4. Generate Haptic Feedback:
    • If the stylus’s pressure exceeds the threshold, generate haptic feedback through the OVRPlugin to vibrate the hand holding the stylus.
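
As a reference, here’s a minimal sketch of what such a handler could look like. It assumes the OVRPlugin polling methods used by Logitech’s sample (GetActionStateFloat/GetActionStateBoolean), uses placeholder action names that you should replace with the ones from the downloaded Input Action Set, and substitutes OVRInput.SetControllerVibration for the sample’s haptic call:

```csharp
using System;
using UnityEngine;

// Minimal StylusHandler sketch. The action name strings are placeholders -- use the
// names defined in the Input Action Set downloaded from Logitech's GitHub page.
public class StylusHandler : MonoBehaviour
{
    [Header("Haptics")]
    [SerializeField] private float hapticAmplitude = 0.6f;
    [SerializeField] private float pressureThreshold = 0.1f;

    public event Action OnFrontPressed;
    public event Action OnBackReleased;

    public float TipPressure { get; private set; }
    public bool FrontButton { get; private set; }
    public bool BackButton { get; private set; }

    private bool _previousFront;
    private bool _previousBack;

    private void Update()
    {
        // Poll the OpenXR actions bound by the MX Ink interaction profile.
        OVRPlugin.GetActionStateFloat("tip", out float pressure);
        OVRPlugin.GetActionStateBoolean("front", out bool front);
        OVRPlugin.GetActionStateBoolean("back", out bool back);

        TipPressure = pressure;
        FrontButton = front;
        BackButton = back;

        // Fire events on button state changes so other scripts can react.
        if (front && !_previousFront) OnFrontPressed?.Invoke();
        if (!back && _previousBack) OnBackReleased?.Invoke();
        _previousFront = front;
        _previousBack = back;

        // Vibrate the hand holding the stylus while the tip is pressed hard enough.
        // The Logitech sample routes this through an OVRPlugin haptic action; here
        // OVRInput.SetControllerVibration is used as a stand-in (assumes right hand).
        float amplitude = pressure > pressureThreshold ? hapticAmplitude * pressure : 0f;
        OVRInput.SetControllerVibration(1f, amplitude, OVRInput.Controller.RTouch);
    }
}
```

Exposing the button states as events keeps the drawing logic decoupled from the input plumbing, which makes the next script much simpler.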

Building the Drawing Logic

  1. Create the Drawing Script:
    • This script controls the drawing and manipulation of lines using Unity’s LineRenderer component.
    • Define variables to store all drawn lines, manage the width of lines based on stylus pressure, and set colors for drawing and highlighting lines.
  2. Start Drawing:
    • When the user presses the stylus tip, the app enters Drawing mode, creating a new line using the LineRenderer.
    • As the stylus moves, the script adds points to the line and adjusts its width dynamically based on the pressure applied.
  3. Highlight and Move Lines:
    • Use proximity detection to highlight lines when the stylus is close. Change the line’s color to highlightColor when it is near enough.
    • Once a line is highlighted, users can grab and move it in 3D space by adjusting the points in the LineRenderer.
  4. Delete Lines:
    • Pressing the back button on the stylus will delete the currently highlighted line from the scene.
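
Here’s a rough sketch of that drawing loop, assuming the StylusHandler sketch from the previous section and that this component lives on the tracked stylus object. Proximity highlighting and grabbing are omitted for brevity, so in this sketch the back button simply removes the most recent stroke:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Rough sketch of the drawing logic driven by the StylusHandler above.
public class Drawing : MonoBehaviour
{
    [SerializeField] private StylusHandler stylus;
    [SerializeField] private Material lineMaterial;
    [SerializeField] private Color drawColor = Color.white;
    [SerializeField] private float maxLineWidth = 0.01f;
    [SerializeField] private float minDistanceBetweenPoints = 0.002f;

    private readonly List<LineRenderer> _lines = new List<LineRenderer>();
    private LineRenderer _currentLine;

    private void OnEnable() => stylus.OnBackReleased += DeleteLastLine;
    private void OnDisable() => stylus.OnBackReleased -= DeleteLastLine;

    private void Update()
    {
        if (stylus.TipPressure > 0.01f)
        {
            if (_currentLine == null) StartLine();
            AddPoint(stylus.transform.position, stylus.TipPressure);
        }
        else
        {
            _currentLine = null; // the stroke ends when the tip is released
        }
    }

    private void StartLine()
    {
        var go = new GameObject("Line");
        _currentLine = go.AddComponent<LineRenderer>();
        _currentLine.material = lineMaterial;
        _currentLine.startColor = _currentLine.endColor = drawColor;
        _currentLine.positionCount = 0;
        _currentLine.useWorldSpace = true;
        _lines.Add(_currentLine);
    }

    private void AddPoint(Vector3 position, float pressure)
    {
        int count = _currentLine.positionCount;
        if (count > 0 &&
            Vector3.Distance(_currentLine.GetPosition(count - 1), position) < minDistanceBetweenPoints)
        {
            return; // skip points that are too close together
        }

        _currentLine.positionCount = count + 1;
        _currentLine.SetPosition(count, position);

        // Simplification: widthMultiplier scales the whole stroke with the current
        // pressure; per-point widths would need a width curve instead.
        _currentLine.widthMultiplier = Mathf.Lerp(0.001f, maxLineWidth, pressure);
    }

    private void DeleteLastLine()
    {
        // The full app deletes the highlighted line; since proximity highlighting is
        // omitted here, the back button removes the most recent stroke instead.
        if (_lines.Count == 0) return;
        var last = _lines[_lines.Count - 1];
        _lines.RemoveAt(_lines.Count - 1);
        Destroy(last.gameObject);
    }
}
```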

Setting Up the Scene

  1. Attach Scripts:
    • Add the StylusHandler and Drawing scripts to the stylus object in your scene.
    • Set your desired colors, thresholds, and event handlers for a fully interactive experience.
  2. Pairing the Stylus with Meta Quest:
    • Open the Meta Horizon App on your phone, go to your headset settings, and select Controllers.
    • Click on “Pair New Controller,” choose “Pair Stylus,” and press the menu and back buttons on the stylus for 4 seconds to pair it with your headset.

Testing Your Drawing App

  • Try Drawing:
    • Once paired, test drawing by pressing the stylus tip or middle button. You’ll see the line width adjust according to the pressure applied.
    • Use the front button to grab and move drawings, and the back button to delete them.
    • Toggle between VR and passthrough mode using the stylus buttons, creating a seamless experience.

Support Black Whale🐋

Thank you for following this article. Stay tuned for more in-depth and insightful content in the realm of XR development! 🌐

Did this article help you out? Consider supporting me on Patreon, where you can find all the source code, or simply subscribe to my YouTube channel!

Integrate Meta’s Voice SDK with LLMs using Amazon Bedrock

Hello XR Developers! In today’s tutorial, we’re diving into how to integrate Meta’s Voice SDK with Large Language Models (LLMs) like Meta’s Llama, using Amazon Bedrock. This setup allows you to send voice commands, process them through an LLM, and receive spoken responses, creating natural conversations with NPCs or assistants in your Unity game.

Let’s break down the process step-by-step!

Setting Up Amazon Bedrock

  1. Accessing Amazon Bedrock:
    • Visit Amazon Bedrock and sign in to your console. If you don’t have an account, register a new one.
    • Navigate to your account name and select Security Credentials. Under “Access keys”, create a new access key; AWS will generate both an Access Key ID and a Secret Access Key.
  2. Request Model Access:
    • In your Bedrock console, go to Bedrock Configurations > Model Access.
    • Select Meta’s Llama 3 (or any other model) and click on “Request model access”.
    • Depending on your region, you might need to select the closest server that hosts your desired model, e.g., London for Llama 3 or Oregon for Llama 3.1.
  3. Check Pricing:
    • Before sending prompts, review the on-demand pricing for your chosen model in the Bedrock console, since usage is billed per input and output token.

Setting Up Unity for LLM Integration

  1. Install NuGet for Unity:
    • Download the latest NuGet for Unity package from GitHub.
    • Drag the downloaded package into your Unity project. This adds a “NuGet” menu to your Unity Editor.
  2. Install Required Packages:
    • If using Unity 2022.3.20 or later, ensure that Newtonsoft.Json is installed via the Unity Package Manager.
    • Install the Amazon Bedrock Runtime package via the NuGet package manager.
  3. Create Amazon Bedrock Connection Script:
    • Create a new script called AmazonBedrockConnection.
    • Utilize the Amazon and Amazon Bedrock namespaces to manage AWS credentials and interact with Bedrock.
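
A bare-bones skeleton for that script might look like this; the region is an assumption, so pick the RegionEndpoint that matches where you requested model access:

```csharp
using Amazon;
using Amazon.BedrockRuntime;
using UnityEngine;

// Skeleton of the Bedrock connection script. EUWest2 (London) is an assumption --
// use the region where you were granted model access.
public class AmazonBedrockConnection : MonoBehaviour
{
    [SerializeField] private string accessKeyId;
    [SerializeField] private string secretAccessKey;

    private AmazonBedrockRuntimeClient _client;

    private void Start()
    {
        _client = new AmazonBedrockRuntimeClient(accessKeyId, secretAccessKey, RegionEndpoint.EUWest2);
    }
}
```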

Developing the Interaction Logic

  1. Set Up AWS Credentials:
    • In your AmazonBedrockConnection class, define fields for accessKeyId and secretAccessKey.
    • Set these values in Unity’s inspector for easy access and management.
  2. Build the UI:
    • Create two text fields for displaying the prompt and response, an input field for typing prompts, and a button to send the prompt.
    • Reference Meta’s Voice SDK text-to-speech (TTS) component to speak the AI’s response.
  3. Integrate AI Prompt Logic:
    • Create a SendPrompt method to handle the core functionality (a sketch follows after this list):
      • Update the UI with the user’s input.
      • Format the input for the AI model and send it using InvokeModelAsync.
      • Process the AI’s response and display it in the UI.
      • Use the TTS component to vocalize the response.
  4. Configure Unity Scene:
    • Add the AmazonBedrockConnection script to an empty GameObject.
    • Enter the Access Key ID and Secret Access Key you created in the AWS console into the Inspector fields.
    • Set up the UI elements (text fields, input field, button).
    • Add a TTS Speaker by navigating to Assets > Create > Voice SDK > TTS > Add TTS Speaker to scene.
    • Customize the speaker’s voice if desired.
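
Putting steps 3 and 4 together, SendPrompt could look roughly like the following, continuing the AmazonBedrockConnection skeleton from earlier. The model ID, the Llama request/response JSON fields, and the TTSSpeaker component name are assumptions, so double-check them against your Bedrock console and installed Voice SDK version:

```csharp
// Continues the AmazonBedrockConnection class from the skeleton above.
// Additional namespaces: System.IO, System.Text, Amazon.BedrockRuntime.Model,
// Newtonsoft.Json.Linq, TMPro, Meta.WitAi.TTS.Utilities.

[SerializeField] private TMP_Text promptText;
[SerializeField] private TMP_Text responseText;
[SerializeField] private TTSSpeaker ttsSpeaker;

public async void SendPrompt(string prompt)
{
    promptText.text = prompt;

    // Llama models on Bedrock expect a JSON body with "prompt", "max_gen_len" and "temperature".
    string body = new JObject
    {
        ["prompt"] = prompt,
        ["max_gen_len"] = 256,
        ["temperature"] = 0.5
    }.ToString();

    var request = new InvokeModelRequest
    {
        ModelId = "meta.llama3-8b-instruct-v1:0", // placeholder -- copy the exact ID from your console
        ContentType = "application/json",
        Accept = "application/json",
        Body = new MemoryStream(Encoding.UTF8.GetBytes(body))
    };

    var response = await _client.InvokeModelAsync(request);

    // The response body is JSON; for Llama the generated text lives under "generation".
    using var reader = new StreamReader(response.Body);
    var json = JObject.Parse(await reader.ReadToEndAsync());
    string generation = json["generation"]?.ToString();

    responseText.text = generation;
    ttsSpeaker.Speak(generation); // vocalize the reply with the Voice SDK TTS speaker
}
```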

Combining with Wake Word Detection

  1. Enhance the Voice Manager:
    • Open your Voice Manager script from the previous tutorial.
    • Add a reference to the AmazonBedrockConnection script.
    • After receiving the full transcription, pass it as a prompt to the SendPrompt method (see the snippet after this list).
  2. Test the Integration:
    • Reference the AmazonBedrockConnection script in your Voice Manager within Unity.
    • Test the entire workflow: use a wake word, speak a command, and receive a response from the AI model.
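
The hand-off itself can be a couple of lines inside your existing Voice Manager; the field and callback names below are hypothetical, so adapt them to your script:

```csharp
// Hypothetical hook inside your existing Voice Manager script.
[SerializeField] private AmazonBedrockConnection bedrockConnection;

// Call this from wherever the Voice Manager receives the final transcription.
private void OnFullTranscription(string transcription)
{
    bedrockConnection.SendPrompt(transcription);
}
```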

Final Thoughts

Congratulations! You’ve successfully set up a system where you can speak commands, process them through an LLM like Meta’s Llama, and have the response spoken back to you. This powerful combination opens up exciting possibilities for creating interactive experiences in your Unity projects.

Support Black Whale🐋

Thank you for following this article. Stay tuned for more in-depth and insightful content in the realm of XR development! 🌐

Did this article help you out? Consider supporting me on Patreon, where you can find all the source code, or simply subscribe to my YouTube channel!
