Choreo Display SDK - v0.1.1

TypeScript types and mock SDK for building external apps that integrate with Choreo Display.

# Configure npm for @choreoai scope (one-time setup)
echo "@choreoai:registry=https://npm.pkg.github.com" >> .npmrc

# Authenticate with GitHub (use your GitHub credentials)
npm login --registry=https://npm.pkg.github.com

# Install the package
npm install @choreoai/display-sdk-types

In production, include the Choreo Display SDK script in your HTML:

<script src="https://display2.choreo.ai/sdk/choreo-display-sdk.js"></script>

This exposes window.ChoreoSDK when your app runs inside Choreo Display.
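
If you want typed access to the global in TypeScript, one option is to declare it yourself. A minimal sketch, assuming the package does not already ship this global declaration:

// global.d.ts (hypothetical helper file; skip it if the package already declares the global)
import type { IChoreoSDK } from '@choreoai/display-sdk-types';

declare global {
  interface Window {
    ChoreoSDK?: IChoreoSDK;
  }
}

export {};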

Create an sdk.ts file that switches between the mock (development) and the real SDK (production):

// src/sdk.ts
import type { IChoreoSDK } from '@choreoai/display-sdk-types';
import { createMockSDK } from '@choreoai/display-sdk-types/mock';

// Use mock in development, real SDK in production
export const sdk: IChoreoSDK = import.meta.env.DEV
  ? createMockSDK({
      videos: [
        { id: 'v1', name: 'Sample Video', url: '/sample.mp4' },
      ],
      debug: true,
    })
  : (window as unknown as { ChoreoSDK: IChoreoSDK }).ChoreoSDK;
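
Note that import.meta.env.DEV is Vite-specific. For bundlers that define process.env.NODE_ENV instead, the same switch might look like this (a sketch; assumes your bundler substitutes that variable at build time):

// Non-Vite variant (sketch): assumes the bundler defines process.env.NODE_ENV
export const sdk: IChoreoSDK =
  process.env.NODE_ENV !== 'production'
    ? createMockSDK({ debug: true })
    : (window as unknown as { ChoreoSDK: IChoreoSDK }).ChoreoSDK;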

Then consume the shared instance from your app code:

import { sdk } from './sdk';

// Wait for SDK to be ready
await sdk.ready();

// Get available videos
const videos = await sdk.getVideos();
const activeVideoUrl = videos[0]?.url;

// Start recording
await sdk.startRecording({ durationMs: 10000 });

// Listen for completion
sdk.once('recordingComplete', ({ recordingId }) => {
  console.log('Recording complete:', recordingId);
});

// Exit with result
await sdk.exitApp({ result: 'success' });

// Send logs to parent Sentry
sdk.log.info('Experience completed', { phase: 4, energy: 'pink' });
sdk.log.error('Camera failed', { error: 'NotAllowedError' });

// Release SDK-managed blob URLs when you no longer need them
if (activeVideoUrl) {
  sdk.releaseUrl(activeVideoUrl);
}
sdk.destroy();

Methods like getVideos(), getVideoUrl(), playVideo(), and getRecording() can return SDK-managed blob URLs. These URLs remain valid for the current iframe session until one of the following happens:

  • your app explicitly calls sdk.releaseUrl(url)
  • your app calls sdk.destroy()
  • the iframe unloads

The SDK does not revoke an earlier URL just because you fetched the same asset again later in the same session.

Call sdk.releaseUrl(url) when your app has replaced a previously returned URL in a <video>/<audio>/<img> element or dropped it from component state.

let currentVideoUrl: string | null = null;

async function playVideo(videoId: string) {
  const { url } = await sdk.playVideo(videoId);

  if (currentVideoUrl) {
    sdk.releaseUrl(currentVideoUrl);
  }

  currentVideoUrl = url;
  videoElement.src = url;
}

Call sdk.destroy() during app teardown or page unload. This revokes every tracked SDK blob URL and removes SDK-owned event listeners.

window.addEventListener('beforeunload', () => {
  sdk.destroy();
});

createMockSDK accepts the following options:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| videos | VideoInfo[] | 2 samples | Mock video library |
| latencyMs | number | 0 | Simulated network delay (ms) |
| recordingDurationMs | number | 5000 | Recording duration (ms) |
| failOnUpload | boolean | false | Simulate upload failure |
| failOnRecording | boolean | false | Simulate recording failure |
| debug | boolean | false | Log SDK calls to the console |

Custom video library:

const sdk = createMockSDK({
  videos: [
    { id: 'hip-hop', name: 'Hip Hop Routine', url: '/videos/hiphop.mp4' },
    { id: 'ballet', name: 'Ballet Solo', url: '/videos/ballet.mp4' },
    { id: 'jazz', name: 'Jazz Combo', url: '/videos/jazz.mp4', metadata: { difficulty: 'advanced' } },
  ],
});

Simulate network latency:

const sdk = createMockSDK({
  latencyMs: 500, // 500 ms delay on all SDK calls
  debug: true, // Log calls to console
});

Test error handling:

// Test recording failures
const recordingFailSdk = createMockSDK({
  failOnRecording: true,
});

recordingFailSdk.on('recordingError', ({ error }) => {
  console.error('Recording failed:', error);
});

// Test upload failures
const uploadFailSdk = createMockSDK({
  failOnUpload: true,
});

uploadFailSdk.on('uploadError', ({ taskId, error }) => {
  console.error('Upload failed:', taskId, error);
});

Unit testing:

import { createMockSDK } from '@choreoai/display-sdk-types/mock';
import { describe, it, expect, vi } from 'vitest';

describe('Recording Flow', () => {
  it('completes recording and emits event', async () => {
    const sdk = createMockSDK({ recordingDurationMs: 100 });
    const onComplete = vi.fn();

    sdk.on('recordingComplete', onComplete);
    await sdk.startRecording();
    await new Promise(r => setTimeout(r, 150));

    expect(onComplete).toHaveBeenCalledWith(
      expect.objectContaining({ recordingId: expect.any(String) })
    );
  });
});

External apps can route logs to the parent app's Sentry and console using the SDK log methods. Logs are automatically tagged with source: 'external-app', the app URL, and the slide name.

// Convenience methods (recommended)
sdk.log.debug('Frame processed', { fps: 30 }); // Console only
sdk.log.info('Face detected', { phase: 3 }); // Console + Sentry Logs
sdk.log.warn('Model load slow', { duration: 5000 }); // Console + Sentry Logs
sdk.log.error('Camera failed', { error: 'denied' }); // Console + Sentry Logs + Sentry issue

// Direct call (equivalent)
sdk.log('info', 'Face detected', { phase: 3 });

The log method silently fails if the parent doesn't support it (backward compatible with older display versions). Logging is fire-and-forget — there's no need to await the result.
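
A common pattern is to forward uncaught errors to the parent's Sentry via the same channel. An app-side sketch using only the sdk.log.error call shown above:

// Forward uncaught errors and unhandled rejections to the parent (illustrative)
window.addEventListener('error', (e) => {
  sdk.log.error('Uncaught error', { message: e.message });
});
window.addEventListener('unhandledrejection', (e) => {
  sdk.log.error('Unhandled rejection', { reason: String(e.reason) });
});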

The SDK provides parent-hosted face detection powered by face-api.js. Models are loaded once in the parent's web worker and persist across iframe reloads — no model re-loading overhead and no tensor memory leaks.

Face detection must be enabled in the gallery editor for each external-app slide that needs it. The gallery config specifies which models to pre-load.

Use detectFace for person identification (Phase 2-3 in interactive experiences):

// Draw camera frame to an offscreen canvas
const canvas = document.createElement('canvas');
canvas.width = 640;
canvas.height = 480;
const ctx = canvas.getContext('2d')!;
ctx.drawImage(cameraVideo, 0, 0, 640, 480);

// Create ImageBitmap (transferred to parent, zero-copy)
const bitmap = await createImageBitmap(canvas);

// Run detection via parent worker
const { descriptor, box, error } = await sdk.detectFace(bitmap, {
  detector: 'ssdMobilenetv1', // default
  minConfidence: 0.5,
});

if (descriptor) {
  // Rehydrate the 128-D face descriptor
  const vec = new Float32Array(descriptor);

  // Use for person matching
  const distance = euclideanDistance(vec, lockedDescriptor);
}

if (box) {
  // Use bounding box for face tracking / zoom
  console.log('Face at:', box.x, box.y, box.width, box.height);
}
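
euclideanDistance and lockedDescriptor above are app-side, not part of the SDK (lockedDescriptor is simply the descriptor of the person you locked onto earlier). A minimal distance implementation for the 128-D descriptors:

// App-side helper (not part of the SDK): Euclidean distance between two descriptors
function euclideanDistance(a: Float32Array, b: Float32Array): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    sum += d * d;
  }
  return Math.sqrt(sum);
}

// face-api.js commonly treats distances below ~0.6 as a match (rule of thumb; tune per deployment)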

Use detectFacePresence for "is someone there?" checks (Phase 1):

const bitmap = await createImageBitmap(canvas);
const { detected, box } = await sdk.detectFacePresence(bitmap, {
  detector: 'tinyFaceDetector', // default, faster
  inputSize: 416, // larger = better at distance
  scoreThreshold: 0.2, // lower = more sensitive
});

if (detected) {
  enterPhase2();
}

Detection options:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| detector | 'tinyFaceDetector' \| 'ssdMobilenetv1' | varies | Detector model |
| inputSize | number | 320 | TinyFaceDetector input resolution (128-608) |
| scoreThreshold | number | 0.5 | TinyFaceDetector confidence threshold |
| minConfidence | number | 0.5 | SSD MobileNet confidence threshold |

Both methods return an error field instead of throwing:

| Error | Meaning |
| --- | --- |
| SERVICE_UNAVAILABLE | Face detection not enabled for this slide, or the worker failed |
| UNSUPPORTED_MODEL | Requested model not loaded in the gallery config |
| TIMEOUT | Detection took longer than 2 seconds |
| NOT_READY | Worker still initializing |
| INVALID_INPUT | ImageBitmap was closed or corrupted |

const result = await sdk.detectFace(bitmap);
if (result.error) {
  console.warn('Detection failed:', result.error);
}
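
Since NOT_READY is transient while the worker spins up, one pattern is to retry with a short delay. A sketch; it recreates the ImageBitmap on each attempt because the bitmap is transferred to the parent (and therefore detached) by the call:

// Illustrative retry helper: recreate the bitmap each attempt, since a
// transferred ImageBitmap is detached and cannot be sent twice
async function detectWithRetry(canvas: HTMLCanvasElement, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    const bitmap = await createImageBitmap(canvas);
    const result = await sdk.detectFace(bitmap);
    if (result.error !== 'NOT_READY') return result;
    await new Promise(r => setTimeout(r, 250 * (i + 1))); // linear backoff
  }
  return { error: 'NOT_READY' as const };
}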

In iframe mode, the SDK returns camera constraints rather than a MediaStream (streams cannot be transferred across iframe boundaries via postMessage). Your app should call getUserMedia with the provided constraints:

const result = await sdk.getCameraStream({ width: 1280, height: 720 });

if (result instanceof MediaStream) {
  // Mock/standalone mode - stream provided directly
  videoElement.srcObject = result;
} else if (result.constraints) {
  // Iframe mode - get stream yourself with provided constraints
  const stream = await navigator.mediaDevices.getUserMedia(result.constraints);
  videoElement.srcObject = stream;

  // Store reference for cleanup
  window._activeStream = stream;
}

When releasing the camera, stop all tracks on the local stream:

// Release local stream if you acquired it
if (window._activeStream) {
  window._activeStream.getTracks().forEach(track => track.stop());
  window._activeStream = null;
}

// Also notify the SDK
await sdk.releaseCamera();

For recording, the SDK manages the stream internally on the parent side. Just call the recording methods:

// Start recording (10 seconds)
await sdk.startRecording({ durationMs: 10000 });

// Listen for completion
sdk.on('recordingComplete', async ({ recordingId, duration, size }) => {
  console.log('Recording saved:', recordingId);

  // Play back the recording
  const { url } = await sdk.getRecording(recordingId);
  const previousUrl = videoElement.dataset.recordingUrl;
  if (previousUrl) {
    sdk.releaseUrl(previousUrl);
  }
  videoElement.src = url;
  videoElement.dataset.recordingUrl = url;

  // Or queue for upload (see Upload section below)
  await sdk.queueUpload({ recordingId });
});

When a camera is mounted sideways (common with Logitech cameras in kiosks), the display startup menu allows selecting a rotation angle (0°, 90°, 180°, 270°). Logitech cameras default to 90°. The SDK exposes this angle so your app can apply a CSS rotation to its <video> element.

The cameraAngle is available from both getConfig() and getCameraStream():

// Via getConfig()
const config = await sdk.getConfig();
console.log(config.cameraAngle); // 0, 90, 180, or 270

// Via getCameraStream()
const result = await sdk.getCameraStream({ width: 1280, height: 720 });
if (!(result instanceof MediaStream)) {
  console.log(result.cameraAngle); // 0, 90, 180, or 270
}

Apply a CSS transform on the <video> element to match the physical camera orientation.

Important: When rotating 90° or 270°, a plain rotate() will shrink the video to fit inside its original bounding box. You must also apply scale() to compensate for the dimension swap.

const result = await sdk.getCameraStream({ width: 1280, height: 720 });

if (!(result instanceof MediaStream)) {
  const stream = await navigator.mediaDevices.getUserMedia(result.constraints);
  videoElement.srcObject = stream;

  const angle = result.cameraAngle || 0;
  const isSwapped = angle === 90 || angle === 270;

  videoElement.addEventListener('loadedmetadata', () => {
    if (isSwapped) {
      // Video dimensions are swapped after rotation — scale to fill container
      const container = videoElement.parentElement!;
      const scale = Math.min(
        container.clientWidth / videoElement.videoHeight,
        container.clientHeight / videoElement.videoWidth
      );
      videoElement.style.transform = `rotate(${angle}deg) scale(${scale})`;
      videoElement.style.transformOrigin = 'center center';
    } else if (angle !== 0) {
      videoElement.style.transform = `rotate(${angle}deg)`;
    }
  });
}

Container CSS — the parent element should clip the overflow:

.camera-container {
  position: relative;
  width: 100%;
  height: 100%;
  overflow: hidden;
}

.camera-container video {
  width: 100%;
  height: 100%;
  object-fit: cover;
}

Complete camera-rotation example:

import type { IChoreoSDK } from '@choreoai/display-sdk-types';

const sdk: IChoreoSDK = (window as any).ChoreoSDK;
await sdk.ready();

const video = document.querySelector('video')!;
const container = video.parentElement!;

// Get camera with rotation info
const result = await sdk.getCameraStream({ width: 1280, height: 720 });

if (result instanceof MediaStream) {
  video.srcObject = result;
} else {
  const stream = await navigator.mediaDevices.getUserMedia(result.constraints);
  video.srcObject = stream;

  const angle = result.cameraAngle || 0;
  const isSwapped = angle === 90 || angle === 270;

  video.addEventListener('loadedmetadata', () => {
    if (isSwapped) {
      const scale = Math.min(
        container.clientWidth / video.videoHeight,
        container.clientHeight / video.videoWidth
      );
      video.style.transform = `rotate(${angle}deg) scale(${scale})`;
      video.style.transformOrigin = 'center center';
    } else if (angle !== 0) {
      video.style.transform = `rotate(${angle}deg)`;
    }
  });
}

When the camera is physically rotated (e.g. Logitech cameras mounted sideways), recordings need rotation handling. The SDK supports two approaches — choose based on your needs.

Approach 1 (metadata-only): record via sdk.startRecording() and attach the cameraAngle as upload metadata. The video file is stored unrotated; the consumer applies rotation at playback. No re-encoding, no quality loss.

const config = await sdk.getConfig();
const cameraAngle = config.cameraAngle; // 0, 90, 180, or 270

// Record using SDK (parent-side, unrotated)
await sdk.startRecording({ durationMs: 10000 });

sdk.once('recordingComplete', async ({ recordingId }) => {
  const { taskId } = await sdk.queueUpload({
    recordingId,
    metadata: {
      cameraAngle: String(cameraAngle),
      capturedAt: new Date().toISOString(),
    },
  });
});

When to use: You control the playback viewer and can apply CSS rotation there, or the backend handles rotation in post-processing.

Approach 2 (canvas blob upload): record rotated video locally via a canvas and upload the blob directly. The uploaded file is correctly oriented everywhere, with no consumer-side rotation needed.

const result = await sdk.getCameraStream({ width: 1280, height: 720 });
// Narrow to iframe mode; in mock/standalone mode the SDK returns a MediaStream directly
if (result instanceof MediaStream) throw new Error('Expected iframe-mode camera constraints');
const stream = await navigator.mediaDevices.getUserMedia(result.constraints);
const angle = result.cameraAngle || 0;

// Set up canvas with rotated dimensions
const video = document.createElement('video');
video.srcObject = stream;
await video.play();

const isSwapped = angle === 90 || angle === 270;
const canvas = document.createElement('canvas');
canvas.width = isSwapped ? video.videoHeight : video.videoWidth;
canvas.height = isSwapped ? video.videoWidth : video.videoHeight;
const ctx = canvas.getContext('2d')!;
const radians = (angle * Math.PI) / 180;

// Draw rotated frames
function drawFrame() {
  ctx.save();
  ctx.translate(canvas.width / 2, canvas.height / 2);
  ctx.rotate(radians);
  ctx.drawImage(video, -video.videoWidth / 2, -video.videoHeight / 2);
  ctx.restore();
  requestAnimationFrame(drawFrame);
}
drawFrame();

// Record from canvas
const canvasStream = canvas.captureStream(30);
const recorder = new MediaRecorder(canvasStream, { mimeType: 'video/webm' });
const chunks: Blob[] = [];
recorder.ondataavailable = (e) => { if (e.data.size > 0) chunks.push(e.data); };

recorder.start();
setTimeout(() => recorder.stop(), 10000); // 10 seconds

recorder.onstop = async () => {
  const rotatedBlob = new Blob(chunks, { type: 'video/webm' });

  // Upload the pre-rotated blob via SDK
  const { taskId } = await sdk.queueUpload({
    blob: rotatedBlob,
    metadata: {
      cameraAngle: String(angle),
      rotated: 'true',
    },
  });
};

When to use: The video must be correctly oriented when downloaded or viewed outside your app, or you want to apply mirror/filters during recording.

| | Metadata-Only | Canvas Blob Upload |
| --- | --- | --- |
| Complexity | Minimal (3 lines of code) | More involved (canvas + MediaRecorder setup) |
| Video quality | Original quality preserved | Re-encoded through canvas |
| Upload latency | Immediate | Immediate (recorded already rotated) |
| Consumer requirement | Must read metadata and rotate | None (video is correctly oriented) |
| Best for | Internal pipelines, controlled viewers | Public sharing, downloads, third-party viewers |
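
For the metadata-only approach, consumer-side rotation can be as simple as a CSS transform once the stored cameraAngle has been read back. An illustrative sketch (see the camera rotation section above for the 90°/270° scaling caveat):

// Illustrative consumer-side playback for the metadata-only approach
function applyStoredRotation(video: HTMLVideoElement, cameraAngle: number): void {
  if (cameraAngle === 0) return;
  video.style.transform = `rotate(${cameraAngle}deg)`;
  video.style.transformOrigin = 'center center';
  // For 90°/270°, also compensate for the swapped dimensions as shown earlier
}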

The SDK supports background uploads that survive tab close, powered by a service worker and the Background Sync API. Uploads are queued and processed reliably in the background.

When the gallery has recording upload enabled, the simplest upload requires no configuration:

sdk.once('recordingComplete', async ({ recordingId }) => {
  const { taskId } = await sdk.queueUpload({ recordingId });
  console.log('Upload queued:', taskId);
});

You can attach key-value pairs to uploads. These are stored as x-amz-meta-* headers on the S3 object and persisted alongside the file. This works with any S3-compatible storage provider including AWS S3, Cloudflare R2, and Choreo Drive.

sdk.once('recordingComplete', async ({ recordingId }) => {
  const { taskId } = await sdk.queueUpload({
    recordingId,
    metadata: {
      sessionId: 'sess_abc123',
      participantName: 'Jane Doe',
      danceName: 'waltz',
      capturedAt: new Date().toISOString(),
    },
  });
});

Metadata constraints:

  • Keys are lowercased automatically (e.g. sessionId becomes x-amz-meta-sessionid)
  • Values must be strings
  • S3 limits: max 2 KB total for all user-defined metadata per object
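
A small app-side guard (illustrative, not an SDK API) can catch the 2 KB limit before queueing:

// Illustrative guard (not an SDK API): S3 allows at most 2 KB of user metadata per object
function assertMetadataSize(metadata: Record<string, string>): void {
  // Counts UTF-16 code units; close enough for ASCII-heavy metadata
  const totalBytes = Object.entries(metadata)
    .reduce((sum, [key, value]) => sum + key.length + value.length, 0);
  if (totalBytes > 2048) {
    throw new Error(`Upload metadata too large: ${totalBytes} bytes (limit 2048)`);
  }
}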

Tracking upload status by polling:

const { taskId } = await sdk.queueUpload({ recordingId });

// Poll for completion
const interval = setInterval(async () => {
  const status = await sdk.getUploadStatus(taskId);

  if (status.status === 'completed') {
    clearInterval(interval);
    console.log('Uploaded to:', status.retrievalUrl);
  } else if (status.status === 'failed') {
    clearInterval(interval);
    console.error('Upload failed:', status.error);
  }
}, 2000);
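
If you prefer a promise over manual polling, you can wrap getUploadStatus in a small helper (a sketch using only the fields shown above):

// Illustrative wrapper: resolves with the retrieval URL, rejects on failure
async function waitForUpload(taskId: string, pollMs = 2000): Promise<string | undefined> {
  for (;;) {
    const status = await sdk.getUploadStatus(taskId);
    if (status.status === 'completed') return status.retrievalUrl;
    if (status.status === 'failed') throw new Error(status.error);
    await new Promise(resolve => setTimeout(resolve, pollMs));
  }
}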

Complete recording-and-upload flow:

import type { IChoreoSDK } from '@choreoai/display-sdk-types';

const sdk: IChoreoSDK = (window as any).ChoreoSDK;
await sdk.ready();

// 1. Start recording
await sdk.startRecording({ durationMs: 5000 });

// 2. Wait for completion, then upload with metadata
sdk.once('recordingComplete', async ({ recordingId, duration, size }) => {
  console.log(`Recorded ${duration}ms, ${size} bytes`);

  // 3. Queue upload with metadata
  const { taskId } = await sdk.queueUpload({
    recordingId,
    metadata: {
      sessionId: 'my-session',
      danceName: 'tango',
    },
  });

  // 4. Confirm the upload was queued
  sdk.once('uploadQueued', ({ taskId: uploadedTaskId }) => {
    console.log('Upload started:', uploadedTaskId);
  });
});

When the gallery has recordingUpload.enabled = true, recordings are auto-uploaded on completion without metadata. To attach metadata, use explicit queueUpload() instead. Note that if auto-upload is enabled, the recording may be uploaded twice (once automatically, once via your explicit call).
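
To avoid the duplicate, you could branch on the gallery setting before queueing. A sketch, assuming getConfig() surfaces the recordingUpload flag (not confirmed by this document):

sdk.once('recordingComplete', async ({ recordingId }) => {
  const config = await sdk.getConfig();
  // Assumption: the gallery's recordingUpload.enabled flag is exposed on the config
  const autoUploadEnabled = (config as any).recordingUpload?.enabled === true;
  if (!autoUploadEnabled) {
    await sdk.queueUpload({ recordingId, metadata: { sessionId: 'sess_abc123' } });
  }
});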

Once uploaded, metadata can be retrieved using any S3-compatible client via the HeadObject operation.

aws s3api head-object \
  --bucket my-bucket \
  --key recordings/rec-123.webm

# Response includes:
#   "Metadata": {
#     "sessionid": "sess_abc123",
#     "participantname": "Jane Doe",
#     "dancename": "waltz"
#   }

R2 is fully S3-compatible. Use the AWS CLI with R2's endpoint:

aws s3api head-object \
  --endpoint-url https://<account-id>.r2.cloudflarestorage.com \
  --bucket my-bucket \
  --key recordings/rec-123.webm

Or via the Cloudflare dashboard under R2 → Object Details → Custom Metadata.

Choreo Drive exposes an S3-compatible API that supports custom metadata on all object operations.

# Using curl
curl -I https://<bucket>.storage.choreo.ai/recordings/rec-123.webm
# Response headers include:
#   x-amz-meta-sessionid: sess_abc123
#   x-amz-meta-dancename: waltz

# Using the AWS CLI with the Choreo Drive endpoint
aws s3api head-object \
  --endpoint-url https://storage.choreo.ai \
  --bucket my-bucket \
  --key recordings/rec-123.webm

Programmatic access with the AWS SDK for JavaScript:

import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3';

const client = new S3Client({
  region: 'us-east-1',
  endpoint: 'https://storage.choreo.ai', // or R2/S3 endpoint
  credentials: {
    accessKeyId: '...',
    secretAccessKey: '...',
  },
});

const response = await client.send(new HeadObjectCommand({
  Bucket: 'my-bucket',
  Key: 'recordings/rec-123.webm',
}));

console.log(response.Metadata);
// { sessionid: 'sess_abc123', dancename: 'waltz' }

Proprietary - Choreo, LLC