Over the past few months, Apple experts have fielded questions about visionOS in Apple’s Vision Pro developer labs around the world. Here are answers to some of the most frequently asked questions, including insights into new concepts like entities, immersive spaces, collision shapes, and more.
How can I interact with an entity using gestures?
There are three important requirements for gesture-based entity interaction:
- The entity must have an InputTargetComponent. Otherwise, it will never receive gesture input.
- It must have a CollisionComponent. The entity's collision shapes define the regions that gestures can actually hit, so make sure the collision shapes are specified correctly for interaction with your entity.
- The gesture you use must be targeted to the entity (or to any entity) you want to interact with. For example:
private var tapGesture: some Gesture {
    TapGesture()
        .targetedToAnyEntity()
        .onEnded { gestureValue in
            // The entity that received the tap.
            let tappedEntity = gestureValue.entity
            print(tappedEntity.name)
        }
}
It’s also a good idea to give interactive entities a HoverEffectComponent, which allows the system to trigger a standard highlight effect when the user looks at the entity. A sketch of this setup follows.
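Here's a minimal sketch that puts these components together; the box entity, its size, and its collision shape are assumptions for illustration — match the shape to your own model's bounds:

import RealityKit

// Create a simple entity to make interactive.
let box = ModelEntity(mesh: .generateBox(size: 0.2), materials: [SimpleMaterial()])
// Allow the entity to receive gesture input.
box.components.set(InputTargetComponent())
// Define the region gestures can actually hit.
box.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
// Let the system apply a standard highlight when the user looks at the entity.
box.components.set(HoverEffectComponent())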
Should I use a window group, an immersive space, or both?
Consider the technical differences between windows, volumes, and immersive spaces when deciding which scene type to use for a particular feature in your app.
Here are some significant technical differences to consider in your decision.
- Windows and volumes from other apps the user has open are hidden when an immersive space is open.
- Windows and volumes clip content that extends beyond their bounds.
- Users have full control over the placement and size of windows. Apps have full control over the placement of content in an immersive space.
- Volumes have a fixed size; windows are resizable.
- ARKit only delivers data to your app when it has an open immersive space.
Explore the Hello World sample code to familiarize yourself with the behavior of each scene type on visionOS, and see the sketch below for how an app declares each one.
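As a rough illustration, here's a minimal sketch of an app that declares all three scene types; the view names and scene identifiers are hypothetical:

import SwiftUI

@main
struct ExampleApp: App {
    var body: some Scene {
        // A resizable window the user can place freely.
        WindowGroup(id: "main") {
            ContentView()
        }

        // A volume: a fixed-size window for 3D content.
        WindowGroup(id: "volume") {
            GlobeView()
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)

        // An immersive space: unbounded content the app positions itself.
        ImmersiveSpace(id: "space") {
            ImmersiveView()
        }
    }
}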
How can I visualize the collision shapes in my scene?
Use the Collision Shapes debug visualization in the Debug Visualizations menu, where you can find several other helpful debug visualizations as well. For information about debug visualizations, see Diagnosing issues in the appearance of a running app.
Can I position SwiftUI views in an immersive space?
Yes! You can reposition SwiftUI views with the offset(x:y:) and offset(z:) methods. Keep in mind that these offsets are specified in points, not meters. You can use PhysicalMetric to convert meters to points, as in the sketch below.
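Here's a minimal sketch of that conversion; the view and the half-meter distance are assumptions for illustration:

import SwiftUI

struct OffsetExample: View {
    // Converts 0.5 meters into the equivalent number of points for this view.
    @PhysicalMetric(from: .meters) private var halfMeter = 0.5

    var body: some View {
        Text("Hello, visionOS!")
            .offset(z: halfMeter) // Moves the view half a meter toward the viewer.
    }
}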
What if I want to attach my SwiftUI views to an entity in my scene?
Use RealityView attachments to create a SwiftUI view and make it accessible as a ViewAttachmentEntity. This entity can be positioned, oriented, and scaled just like any other entity.
RealityView { content, attachments in
    // Look up the attachment entity by its identifier and add it to the scene.
    if let attachmentEntity = attachments.entity(for: "uniqueID") {
        content.add(attachmentEntity)
    }
} attachments: {
    Attachment(id: "uniqueID") {
        Text("My Attachment")
    }
}
Can I position windows programmatically?
There is no API for positioning windows, but we'd like to know about your use case. Please file an enhancement request. See Positioning and sizing windows for more information on this topic.
Is there a way to find out what the user is looking at?
As noted in Adopting best practices for privacy and user preferences, the system handles camera and sensor inputs without passing the data directly to apps. There is no way to get precise eye movements or an exact line of sight. Instead, create interface elements that people can interact with, and let the system manage the interaction. If you have a use case that can't work this way and doesn't require precise eye tracking, please file an enhancement request.
When are the onHover and onContinuousHover actions called on visionOS?
The onHover and onContinuousHover actions are called when a finger hovers over the view, or when the pointer hovers over the view while a trackpad is connected, as in the sketch below.
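Here's a minimal sketch of tracking the hover location over a view; the view itself and its size are assumptions for illustration:

import SwiftUI

struct HoverExample: View {
    @State private var hoverLocation: CGPoint?

    var body: some View {
        Rectangle()
            .frame(width: 200, height: 200)
            .onContinuousHover { phase in
                switch phase {
                case .active(let location):
                    // A finger (or a connected trackpad's pointer) is over the view.
                    hoverLocation = location
                case .ended:
                    hoverLocation = nil
                }
            }
    }
}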
Can I show my own immersive environments in my app?
If your app has an ImmersiveSpace open, you can create a large sphere with an UnlitMaterial and scale it so its geometry faces inward:
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            do {
                // Create a large sphere to surround the viewer.
                let mesh = MeshResource.generateSphere(radius: 10)
                // Use an unlit material so the texture isn't tone-mapped.
                var material = UnlitMaterial(applyPostProcessToneMap: false)
                let textureResource = try await TextureResource(named: "example")
                material.color = .init(tint: .white, texture: .init(textureResource))
                let entity = ModelEntity(mesh: mesh, materials: [material])
                // Flip the sphere so its geometry faces inward.
                entity.scale.x *= -1
                content.add(entity)
            } catch {
                // Handle failures in loading the texture resource.
                print("Failed to load environment texture: \(error)")
            }
        }
    }
}
I have existing stereo videos. How can I convert them to MV-HEVC?
AVFoundation provides APIs for writing videos in MV-HEVC format.
To convert your videos to MV-HEVC:
- Create an AVAsset for each of the left and right views.
- Use AVOutputSettingsAssistant to find output settings that work for MV-HEVC.
- Specify the horizontal disparity adjustment and field of view (these values are asset-specific). Here's an example:
// Configure the compression properties for stereo output.
var compressionProperties = outputSettings[AVVideoCompressionPropertiesKey] as! [String: Any]
compressionProperties[kVTCompressionPropertyKey_HorizontalDisparityAdjustment as String] = horizontalDisparityAdjustment
compressionProperties[kCMFormatDescriptionExtension_HorizontalFieldOfView as String] = horizontalFOV

// Tag each frame pair with a layer ID and an eye assignment.
let taggedBuffers: [CMTaggedBuffer] = [
    .init(tags: [.videoLayerID(0), .stereoView(.leftEye)], pixelBuffer: leftSample.imageBuffer!),
    .init(tags: [.videoLayerID(1), .stereoView(.rightEye)], pixelBuffer: rightSample.imageBuffer!)
]
// Append both eyes' buffers at the left sample's presentation time.
let didAppend = adaptor.appendTaggedBuffers(taggedBuffers,
                                            withPresentationTime: leftSample.presentationTimeStamp)
How do I light my scene with RealityKit on visionOS?
You can light your scene with RealityKit on visionOS by:
- Using the system-provided automatic lighting environment that updates based on real-world surroundings.
- Providing your own image-based lighting via an ImageBasedLightComponent. To see an example, create a new visionOS app, select RealityKit as the Immersive Space Renderer, and select Full as the Immersive Space. A sketch of this approach follows the list.
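Here's a minimal sketch of the image-based lighting approach, assuming a hypothetical EnvironmentResource named "Sunlight" exists in your app bundle:

import SwiftUI
import RealityKit

struct LitView: View {
    var body: some View {
        RealityView { content in
            guard let environment = try? await EnvironmentResource(named: "Sunlight")
            else { return }
            let entity = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                     materials: [SimpleMaterial()])
            // Use the environment as this entity's image-based light source...
            entity.components.set(ImageBasedLightComponent(source: .single(environment)))
            // ...and have the entity receive light from that source.
            entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
            content.add(entity)
        }
    }
}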
I see that CustomMaterial is not supported on visionOS. Is there a way to create objects with custom shaders?
You can create objects with custom shaders in Reality Composer Pro using Shader Graph. A material created this way is available to your app as a ShaderGraphMaterial, so you can change the shader's inputs dynamically in code.
For a detailed introduction to shader graphs, see Materials in Reality Composer Pro.
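Here's a minimal sketch of loading and updating such a material; the material path, scene file, parameter name, and the realityKitContentBundle from the Xcode template's RealityKitContent package are all assumptions for illustration:

import RealityKit
import RealityKitContent

func makeCustomShadedEntity() async throws -> ModelEntity {
    // Load a material authored in Reality Composer Pro.
    var material = try await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle)
    // Dynamically update a parameter the shader graph exposes.
    try material.setParameter(name: "myColor",
                              value: .color(.init(red: 1, green: 0, blue: 0, alpha: 1)))
    return ModelEntity(mesh: .generateSphere(radius: 0.1), materials: [material])
}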
How do I position entities relative to the device?
In an immersive space, you can get the device's full transform using WorldTrackingProvider's queryDeviceAnchor(atTimestamp:) method, as in the sketch below.
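Here's a minimal sketch; the session setup and the use of the current media time as the query timestamp are assumptions for illustration:

import ARKit
import QuartzCore

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// Start world tracking once, while an immersive space is open.
func startTracking() async throws {
    try await session.run([worldTracking])
}

// Returns the device's transform relative to the world origin, if available.
func deviceTransform() -> simd_float4x4? {
    worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())?
        .originFromAnchorTransform
}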
Learn more about building apps for visionOS
Q&A: Spatial design for visionOS
Get expert advice from Apple’s design team on creating experiences for Apple Vision Pro.
Watch now
Focus on: Developing for visionOS
Learn how the developers behind djay, Blackbox, JigSpace, and XRHealth got started designing and building apps for Apple Vision Pro.
Watch now
Focus on: Developer Tools for visionOS
Learn how developers can use Xcode, Reality Composer Pro, and the visionOS simulator to start building apps for Apple Vision Pro.
Watch now
The sample code in this article is provided under the Apple Sample Code License.