Apple Vision Pro Input & Object Manipulation with Unity PolySpatial

  • 8 April 2024
  • 3 min read

Introduction 🌟

XR developers, get ready to dive into 3D touch input for visionOS devices using Unity! This comprehensive guide will walk you through setting up your project, testing scenes on your device, and implementing sophisticated input actions to bring your Unity applications to life. Whether you’re looking to pop balloons in a gallery or manipulate objects with a pinch, this tutorial has you covered.

Getting Started 🚀

Watch a full setup here!

  1. Initial Setup: Begin with a fresh Unity project. Ensure you’ve selected visionOS as the target platform and the right target SDK for your device. If you’re new to this, check out our previous video on installing the necessary packages.
  2. Install Required Packages: Navigate to the Package Manager and install the “Play to Device” input settings along with the Unity PolySpatial Samples from the PolySpatial package. Don’t forget to add all sample scenes to your Build Settings for comprehensive testing later.
  3. Testing with Play to Device: Open the Balloon Gallery sample and use the “Play to Device” feature for a seamless test on your Apple Vision Pro. Join the TestFlight beta for the “Play to Device” app as instructed on our website, install it on your device, and connect it to your Unity Editor via the displayed local IP address. Now you’re set to manipulate the scene directly on your device!

Exploring Unity Samples 🎈

  • Experiment with the Object Manipulation scene by moving cubes with a pinch gesture. Unity offers numerous other sample scenes for you to test and enjoy, and the interactive experience on the device is smooth and intuitive.

Advanced Testing Methods 📱

  • For a true representation of your application, build an Xcode project. Ensure your device is connected to your Mac and that Developer Mode is enabled on the headset for seamless deployment. This method lets you test your Unity scenes on the actual operating system.

Implementing Spatial Input 🌐

  • Dive into the “Input Data Visualisation” scene to explore Unity’s use of Input Action References. While Unity offers sophisticated implementations, we recommend starting with the Spatial Pointer Device for most applications due to its straightforward approach to 3D touch input; a minimal polling sketch follows below.
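
To get a feel for what the Spatial Pointer Device surfaces, here is a minimal sketch that polls active touches through the Enhanced Touch API and logs the 3D pointer data behind each one. It assumes the PolySpatial packages are installed and a Volume Camera is in the scene; the class name SpatialPointerLogger is ours, not part of the samples.

```csharp
using Unity.PolySpatial.InputDevices;
using UnityEngine;
using UnityEngine.InputSystem.EnhancedTouch;
using UnityEngine.InputSystem.LowLevel;
using Touch = UnityEngine.InputSystem.EnhancedTouch.Touch;

// Hypothetical helper for exploring spatial input; not part of the samples.
public class SpatialPointerLogger : MonoBehaviour
{
    void OnEnable()
    {
        // Spatial pointer data is surfaced through the Enhanced Touch API,
        // which must be enabled explicitly.
        EnhancedTouchSupport.Enable();
    }

    void Update()
    {
        foreach (var touch in Touch.activeTouches)
        {
            // Each active touch carries a SpatialPointerState with full 3D data.
            SpatialPointerState state = EnhancedSpatialPointerSupport.GetPointerState(touch);
            Debug.Log($"kind: {state.Kind}, position: {state.interactionPosition}, target: {state.targetObject}");
        }
    }
}
```

Attach it to any GameObject: every interaction should log its kind (poke, direct pinch, or indirect gaze-and-pinch), its 3D interaction position, and the collider-bearing object it targeted.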

Building Your Scene 🔨

  1. Scene Setup: Create a new scene with a regular camera and a Volume Camera. Decide whether your app should run in a bounded or unbounded volume and add the necessary components for a tailored experience.
  2. Object Manipulation: Add a cube with a collider to your scene. Add the VisionOS Hover Effect and Grounding Shadow components for realistic interactions. Create an empty GameObject named “Input Manager” for your logic, and disable the “Input System UI Input Module” on the EventSystem.
  3. Input Handling: Use the “Color Change On Input” script to change the cube’s color upon interaction. The script leverages the Enhanced Touch API and the “InputSystem.LowLevel” namespace for precise input detection and object manipulation; a sketch of this approach follows the list.
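
The “Color Change On Input” script itself ships with the project sources; as a stand-in, here is a minimal sketch in the same spirit, assuming the cube has a collider and a MeshRenderer. It resolves the spatial pointer state behind the primary touch and recolors whatever object the interaction began on.

```csharp
using Unity.PolySpatial.InputDevices;
using UnityEngine;
using UnityEngine.InputSystem.EnhancedTouch;
using UnityEngine.InputSystem.LowLevel;
using Touch = UnityEngine.InputSystem.EnhancedTouch.Touch;
using TouchPhase = UnityEngine.InputSystem.TouchPhase;

// Minimal stand-in for the article's "Color Change On Input" script.
public class ColorChangeOnInput : MonoBehaviour
{
    void OnEnable()
    {
        EnhancedTouchSupport.Enable();
    }

    void Update()
    {
        var activeTouches = Touch.activeTouches;
        if (activeTouches.Count == 0)
            return;

        // Resolve the 3D spatial pointer data behind the primary touch.
        SpatialPointerState touchData = EnhancedSpatialPointerSupport.GetPointerState(activeTouches[0]);

        // On the first frame of an interaction, recolor whatever object was targeted.
        if (activeTouches[0].phase == TouchPhase.Began && touchData.targetObject != null)
        {
            if (touchData.targetObject.TryGetComponent(out MeshRenderer meshRenderer))
                meshRenderer.material.color = Random.ColorHSV();
        }
    }
}
```

Drop the script on the “Input Manager” object: because targetObject is resolved from the touch itself, one manager can handle every collider in the scene.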

Advanced Interactions 🌈

  • Tailor your application’s interaction logic to distinguish between direct touches and pinches, allowing for more nuanced control over object manipulation. Experiment with moving objects within the bounded volume to provide a rich interactive experience; see the sketch below.
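
Here is a sketch of that distinction, building on the same Enhanced Touch polling as above: it only starts a drag when the pointer kind is a direct or indirect pinch, then follows the pinch’s 3D interaction position while the touch moves. The class name PinchMoveManager is illustrative.

```csharp
using Unity.PolySpatial.InputDevices;
using UnityEngine;
using UnityEngine.InputSystem.EnhancedTouch;
using UnityEngine.InputSystem.LowLevel;
using Touch = UnityEngine.InputSystem.EnhancedTouch.Touch;
using TouchPhase = UnityEngine.InputSystem.TouchPhase;

// Illustrative sketch: drag objects with a pinch, ignore plain pokes.
public class PinchMoveManager : MonoBehaviour
{
    Transform m_Selected;

    void OnEnable()
    {
        EnhancedTouchSupport.Enable();
    }

    void Update()
    {
        var activeTouches = Touch.activeTouches;
        if (activeTouches.Count == 0)
        {
            m_Selected = null;
            return;
        }

        SpatialPointerState touchData = EnhancedSpatialPointerSupport.GetPointerState(activeTouches[0]);

        switch (activeTouches[0].phase)
        {
            case TouchPhase.Began:
                // Only pinches (direct or gaze-driven indirect) start a drag;
                // a plain poke (SpatialPointerKind.Touch) is ignored here.
                bool isPinch = touchData.Kind == SpatialPointerKind.IndirectPinch ||
                               touchData.Kind == SpatialPointerKind.DirectPinch;
                if (isPinch && touchData.targetObject != null)
                    m_Selected = touchData.targetObject.transform;
                break;

            case TouchPhase.Moved:
                // Follow the pinch. A production version would preserve the
                // initial grab offset instead of snapping the pivot to the hand.
                if (m_Selected != null)
                    m_Selected.position = touchData.interactionPosition;
                break;

            case TouchPhase.Ended:
            case TouchPhase.Canceled:
                m_Selected = null;
                break;
        }
    }
}
```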

Conclusion 🎉

You’re now equipped to implement your own interactions for Apple Vision Pro applications. This guide has laid the groundwork for you to experiment and innovate with Unity for visionOS devices. We can’t wait to see what you’ll create!

Support Black Whale 🐋

Thank you for following this article. Stay tuned for more in-depth and insightful content in the realm of XR development! 🌐

Did this article help you out? Consider supporting me on Patreon, where you can find all the source code, or simply subscribe to my YouTube channel!
