
feat: add Fishjam background blur integration #1080

Open
chmjkb wants to merge 23 commits into main from chmjkb/webrtc-integration

Conversation

Collaborator

@chmjkb chmjkb commented Apr 17, 2026

Description

Adds react-native-executorch-webrtc package for real-time background blur in Fishjam WebRTC video calls using on-device ExecuTorch segmentation models.

Key features:

  • useBackgroundBlur hook providing blurMiddleware for Fishjam's useCamera
  • Blur compositing (OpenGL ES on Android, Core Image on iOS)
  • Morphological mask cleaning + EMA temporal smoothing (C++/OpenCV)

Architecture:

  • Reuses BaseSemanticSegmentation from react-native-executorch for inference
  • Registers custom VideoFrameProcessor with Fishjam's WebRTC pipeline
  • All heavy processing in native (C++/Objective-C++) for performance

Introduces a breaking change?

  • Yes
  • No

Type of change

  • Bug fix (change which fixes an issue)
  • New feature (change which adds functionality)
  • Documentation update (improves or adds clarity to existing documentation)
  • Other (chores, tests, code style improvements etc.)

Tested on

  • iOS
  • Android

Testing instructions

You'll need to set up your Fishjam account, then verify that this example works properly:

import { useState, useEffect } from 'react';
import { StatusBar } from 'expo-status-bar';
import { StyleSheet, Text, View, TouchableOpacity } from 'react-native';
import {
  FishjamProvider,
  useConnection,
  useCamera,
  useInitializeDevices,
  useSandbox,
  RTCView,
} from '@fishjam-cloud/react-native-client';
import { useBackgroundBlur } from 'react-native-executorch-webrtc';
import { ResourceFetcher, SELFIE_SEGMENTATION, initExecutorch } from 'react-native-executorch';
import { ExpoResourceFetcher } from 'react-native-executorch-expo-resource-fetcher';

initExecutorch({ resourceFetcher: ExpoResourceFetcher });

const FISHJAM_ID = 'your-id';

function CameraScreen() {
  const { initializeDevices } = useInitializeDevices();
  const { cameraStream, cameraDevices, currentCamera, selectCamera, setCameraTrackMiddleware } = useCamera();
  const { joinRoom, leaveRoom, peerStatus } = useConnection();
  const { getSandboxPeerToken } = useSandbox();
  const [isJoining, setIsJoining] = useState(false);
  const [modelPath, setModelPath] = useState<string | null>(null);
  const [downloadProgress, setDownloadProgress] = useState(0);
  const [blurEnabled, setBlurEnabled] = useState(false);

  const { blurMiddleware } = useBackgroundBlur({
    modelUri: modelPath || '',
    blurRadius: 12,
  });

  // Download the selfie segmentation model
  useEffect(() => {
    const downloadModel = async () => {
      try {
        const paths = await ResourceFetcher.fetch(
          (progress) => setDownloadProgress(progress),
          SELFIE_SEGMENTATION.modelSource
        );
        if (paths?.[0]) {
          setModelPath(paths[0]);
        }
      } catch (error) {
        console.error('Failed to download model:', error);
      }
    };
    downloadModel();
  }, []);

  const handleFlipCamera = async () => {
    if (cameraDevices.length < 2) return;
    const currentIndex = cameraDevices.findIndex(
      (device) => device.deviceId === currentCamera?.deviceId
    );
    const nextIndex = (currentIndex + 1) % cameraDevices.length;
    await selectCamera(cameraDevices[nextIndex].deviceId);
  };

  const handleToggleBlur = async () => {
    if (!modelPath) return;
    if (blurEnabled) {
      await setCameraTrackMiddleware(null);
      setBlurEnabled(false);
    } else {
      await setCameraTrackMiddleware(blurMiddleware);
      setBlurEnabled(true);
    }
  };

  useEffect(() => {
    initializeDevices();
  }, []);

  const handleJoinRoom = async () => {
    setIsJoining(true);
    try {
      const roomName = 'demo-room';
      const peerName = `user_${Date.now()}`;
      const peerToken = await getSandboxPeerToken(roomName, peerName);
      await joinRoom({ peerToken });
    } catch (error) {
      console.error('Failed to join room:', error);
    } finally {
      setIsJoining(false);
    }
  };

  return (
    <View style={styles.container}>
      <StatusBar style="light" />

      <View style={styles.videoContainer}>
        {cameraStream ? (
          <RTCView
            mediaStream={cameraStream}
            style={styles.video}
            objectFit="cover"
            mirror={true}
          />
        ) : (
          <View style={styles.placeholder}>
            <Text style={styles.placeholderText}>Starting camera...</Text>
          </View>
        )}
      </View>

      <View style={styles.controls}>
        <Text style={styles.status}>Status: {peerStatus}</Text>
        {downloadProgress > 0 && downloadProgress < 1 && (
          <Text style={styles.status}>
            Downloading model: {(downloadProgress * 100).toFixed(0)}%
          </Text>
        )}
        <View style={styles.buttons}>
          <TouchableOpacity style={styles.flipButton} onPress={handleFlipCamera}>
            <Text style={styles.buttonText}>Flip</Text>
          </TouchableOpacity>
          <TouchableOpacity
            style={[styles.blurButton, blurEnabled && styles.blurButtonActive, !modelPath && styles.buttonDisabled]}
            onPress={handleToggleBlur}
            disabled={!modelPath}
          >
            <Text style={styles.buttonText}>{blurEnabled ? 'Blur On' : 'Blur'}</Text>
          </TouchableOpacity>
          {peerStatus === 'connected' ? (
            <TouchableOpacity style={styles.leaveButton} onPress={leaveRoom}>
              <Text style={styles.buttonText}>Leave Room</Text>
            </TouchableOpacity>
          ) : (
            <TouchableOpacity
              style={[styles.button, isJoining && styles.buttonDisabled]}
              onPress={handleJoinRoom}
              disabled={isJoining}
            >
              <Text style={styles.buttonText}>
                {isJoining ? 'Joining...' : 'Join Room'}
              </Text>
            </TouchableOpacity>
          )}
        </View>
      </View>
    </View>
  );
}

export default function App() {
  return (
    <FishjamProvider fishjamId={FISHJAM_ID}>
      <CameraScreen />
    </FishjamProvider>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#000',
  },
  videoContainer: {
    flex: 1,
  },
  video: {
    flex: 1,
  },
  placeholder: {
    flex: 1,
    alignItems: 'center',
    justifyContent: 'center',
    backgroundColor: '#1a1a1a',
  },
  placeholderText: {
    color: '#666',
    fontSize: 18,
  },
  controls: {
    padding: 20,
    paddingBottom: 40,
    alignItems: 'center',
    gap: 12,
  },
  status: {
    color: '#888',
    fontSize: 14,
  },
  buttons: {
    flexDirection: 'row',
    gap: 12,
  },
  button: {
    backgroundColor: '#007AFF',
    paddingHorizontal: 32,
    paddingVertical: 14,
    borderRadius: 12,
  },
  flipButton: {
    backgroundColor: '#333',
    paddingHorizontal: 24,
    paddingVertical: 14,
    borderRadius: 12,
  },
  leaveButton: {
    backgroundColor: '#FF3B30',
    paddingHorizontal: 32,
    paddingVertical: 14,
    borderRadius: 12,
  },
  blurButton: {
    backgroundColor: '#5856D6',
    paddingHorizontal: 24,
    paddingVertical: 14,
    borderRadius: 12,
  },
  blurButtonActive: {
    backgroundColor: '#34C759',
  },
  buttonDisabled: {
    backgroundColor: '#444',
  },
  buttonText: {
    color: '#fff',
    fontSize: 16,
    fontWeight: '600',
  },
});

Screenshots

Related issues

Checklist

  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have updated the documentation accordingly
  • My changes generate no new warnings

Additional notes

@chmjkb chmjkb marked this pull request as ready for review April 20, 2026 13:15
@chmjkb chmjkb requested a review from mkopcins April 20, 2026 13:15
@chmjkb chmjkb linked an issue Apr 21, 2026 that may be closed by this pull request
@msluszniak msluszniak added the feature PRs that implement a new feature label Apr 22, 2026
Comment on lines 20 to +35
@@ -26,7 +27,12 @@
       "backgroundColor": "#ffffff"
     },
     "package": "com.anonymous.computervision",
-    "permissions": ["android.permission.CAMERA"]
+    "permissions": [
+      "android.permission.CAMERA",
+      "android.permission.INTERNET",
+      "android.permission.RECORD_AUDIO",
+      "android.permission.ACCESS_NETWORK_STATE"
+    ]
Member

Why do we have these changes?

Collaborator Author

I can't remember if all of these are needed, but I had some issues with WebRTC without them.

Member

Whoa, wait a minute: what does the computer-vision app.json file have to do with webrtc, a completely separate package?

@chmjkb chmjkb requested a review from msluszniak May 4, 2026 11:52
Member

@msluszniak msluszniak left a comment


Remaining items:

  • EMA state race + process-global model state: see the inline comment on FrameProcessorBridge.cpp.
  • iOS sibling races + singleton design: see the inline comment on ExecutorchFrameProcessor.mm.
  • Monorepo header path: see the inline comment on the podspec.

Also — should we add a section about this package to the documentation?


// Mask post-processing state (EMA temporal smoothing). Touched only on the
// capture thread; unload resets it.
cv::Mat g_previousMask;
Member

The LoadedModel+mutex+shared_ptr snapshot doesn't cover g_previousMask / g_hasHistory. Capture thread reads/writes them at L158–165; unloadModel resets them at L216–217 from a different thread — cv::Mat isn't thread-safe for concurrent release/assignment, so an in-flight frame races with release(). The L41–42 "Touched only on the capture thread" comment contradicts itself.

Fix: hoist to a per-instance native handle (loadModel returns a jlong, others take it back) with the EMA state inside LoadedModel so the existing snapshot covers it.
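
A minimal sketch of the suggested shape, not the PR's actual code: the EMA history moves off process globals into a per-instance LoadedModel, which the Java side holds as an opaque pointer-sized handle (jlong over JNI; intptr_t here so the sketch compiles without JNI headers). All names below are illustrative.

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

struct LoadedModel {
  std::mutex mutex;                 // one lock now covers model + EMA state
  std::vector<float> previousMask;  // was g_previousMask
  bool hasHistory = false;          // was g_hasHistory
};

// loadModel returns the handle; every other entry point takes it back.
intptr_t loadModel() {
  return reinterpret_cast<intptr_t>(new LoadedModel());
}

// Safe to call from any thread: the lock serializes against in-flight frames,
// so there is no concurrent release/assignment on the history buffer.
void resetHistory(intptr_t handle) {
  auto* model = reinterpret_cast<LoadedModel*>(handle);
  std::lock_guard<std::mutex> lock(model->mutex);
  model->previousMask.clear();
  model->hasHistory = false;
}

void unloadModel(intptr_t handle) {
  delete reinterpret_cast<LoadedModel*>(handle);
}
```

With the history inside LoadedModel, the existing mutex + shared_ptr snapshot pattern covers it for free, and two camera tracks no longer share one history.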

});
}

- (void)setBlurRadius:(float)blurRadius {
Member

Several cross-thread accesses without synchronization, all rooted in +sharedInstance:

  • _blurRadius (L122 vs L242): plain ivar across JS/capture threads. std::atomic<float> is the one-liner. Android handles this via @Volatile pendingBlurRadius.
  • _isProcessing (L165–177): plain BOOL, safe only because WebRTC happens to deliver frames serially.
  • _lastProcessedFrame: raw ivar — concurrent ARC stores from capture queue (L181) vs unload nil-out (L136) can over-release.
  • _outputPool: released in unload (L139) while capture queue may be reading it. UAF window.
  • _previousMask — see Android comment.

All collapse when state moves off +sharedInstance onto per-instance ivars with a serial queue (or os_unfair_lock). The singleton also blocks multi-track / camera-switch.
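
For the _blurRadius case specifically, the std::atomic<float> fix is roughly this (a C++ sketch with illustrative struct and function names, not the file's actual Objective-C++ methods):

```cpp
#include <atomic>

// The blur radius becomes an atomic member: a store from the JS thread can
// never be torn by a concurrent load on the capture thread, and no lock or
// queue hop is needed for a single float.
struct FrameProcessorState {
  std::atomic<float> blurRadius{12.0f};
};

void setBlurRadius(FrameProcessorState& s, float radius) {  // JS thread
  s.blurRadius.store(radius, std::memory_order_relaxed);
}

float currentBlurRadius(const FrameProcessorState& s) {     // capture thread
  return s.blurRadius.load(std::memory_order_relaxed);
}
```

Relaxed ordering suffices here because the radius is a standalone value; the other ivars (_lastProcessedFrame, _outputPool) hold object graphs and still need the serial queue or lock.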

# react-native-executorch exposes rnexecutorch/* headers via its header_dir.
# However, executorch SDK headers and internal headers don't propagate to
# dependent pods, so we need to add them here.
rne_path = '${PODS_ROOT}/../../node_modules/react-native-executorch'
Member

${PODS_ROOT}/../../node_modules/react-native-executorch assumes a flat layout. In yarn/pnpm workspaces node_modules hoists to the workspace root, so the path resolves to nowhere and the executorch SDK headers aren't found — breaks any workspace consumer, including this repo's own apps.

Two options:

rne_path = ['../node_modules/react-native-executorch',
            '../../node_modules/react-native-executorch',
            '../../../node_modules/react-native-executorch']
  .map { |p| File.expand_path(p, __dir__) }
  .find { |p| File.exist?(p) }

Or better, expose third-party/include/ and common/ from react-native-executorch.podspec as public_header_files, removing the need for any HEADER_SEARCH_PATHS workaround here. The L18–20 comment already admits this is the design issue.


msluszniak commented May 4, 2026

Also, please resolve the conflicts. I tested this PR on Android and it worked well; I'd be glad if someone tested it on iOS as well.


Labels

feature PRs that implement a new feature


Development

Successfully merging this pull request may close these issues.

React Native WebRTC integration
