
Update WebRTC framework #28

Closed
rostopira opened this issue Nov 22, 2018 · 10 comments
Labels
😭help wanted Extra attention is needed

Comments

rostopira (Collaborator) commented Nov 22, 2018

Currently I'm trying to update WebRTC.
The goal is to make this plugin easily updatable by rolling the dependency version.
I've removed WebRTC.framework from the repository root and switched to a CocoaPods dependency (just set s.dependency 'GoogleWebRTC' in the podspec).
However, I'm stuck updating FlutterRTCMediaStream.m.

The following code is not working:

    if (videoDevice) {
        RTCVideoSource *videoSource = [self.peerConnectionFactory videoSource];
        
        // FIXME The effort above to find a videoDevice value which satisfies the
        // specified constraints was pretty much wasted. Salvage facingMode for
        // starters because it is kind of a common and hence important feature on
        // a mobile device.
        
        RTCCameraVideoCapturer *capt = [[RTCCameraVideoCapturer alloc] initWithDelegate:videoSource];
        AVCaptureDeviceFormat *selectedFormat = nil;
        int currentDiff = INT_MAX;
        int targetWidth = 1280;
        int targetHeight = 720;
        for (AVCaptureDeviceFormat *format in [RTCCameraVideoCapturer supportedFormatsForDevice:videoDevice]) {
            CMVideoDimensions dimension = CMVideoFormatDescriptionGetDimensions(format.formatDescription);
            FourCharCode pixelFormat = CMFormatDescriptionGetMediaSubType(format.formatDescription);
            int diff = abs(targetWidth - dimension.width) + abs(targetHeight - dimension.height);
            if (diff < currentDiff) {
                selectedFormat = format;
                currentDiff = diff;
            } else if (diff == currentDiff && pixelFormat == [capt preferredOutputPixelFormat]) {
                selectedFormat = format;
            }
        }
        //NSLog(@"test %@", [(NSString *)selectedFormat lowercaseString]);
        if (selectedFormat == nil) {
            NSLog(@"Capture format fucked up. Fallback");
            selectedFormat = [RTCCameraVideoCapturer supportedFormatsForDevice:videoDevice].firstObject;
        }
        [capt startCaptureWithDevice:videoDevice format:selectedFormat fps:30 completionHandler:^(NSError *error) {
            NSLog(@"Start capture error: %@", [error localizedDescription]);
        }];
        
        NSString *trackUUID = [[NSUUID UUID] UUIDString];
        RTCVideoTrack *videoTrack = [self.peerConnectionFactory videoTrackWithSource:videoSource trackId:trackUUID];
        [mediaStream addVideoTrack:videoTrack];
        
        successCallback(mediaStream);
    } else {
        // According to step 6.2.3 of the getUserMedia() algorithm, if there is no
        // source, fail with a new OverconstrainedError.
        errorCallback(@"OverconstrainedError", /* errorMessage */ nil);
    }
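
The targetWidth/targetHeight/fps values above are hard-coded to 1280x720 at 30 fps. Below is a minimal sketch of reading them from the getUserMedia constraints map instead; the helper name, the videoConstraints variable, and the "mandatory"/"width"/"height"/"frameRate" keys are illustrative assumptions, not necessarily the plugin's actual constraint layout.

    #import <Foundation/Foundation.h>

    // Hypothetical helper: pull an integer constraint out of the getUserMedia
    // constraints map, falling back to a default when it is missing.
    static int constraintIntValue(NSDictionary *constraints, NSString *key, int fallback) {
        id value = constraints[key];
        if (value == nil) {
            // Also look inside an optional "mandatory" sub-dictionary.
            NSDictionary *mandatory = constraints[@"mandatory"];
            if ([mandatory isKindOfClass:[NSDictionary class]]) {
                value = mandatory[key];
            }
        }
        return [value respondsToSelector:@selector(intValue)] ? [value intValue] : fallback;
    }

    // Usage inside the capture setup, replacing the hard-coded values
    // (videoConstraints being the video part of the constraints map):
    // int targetWidth  = constraintIntValue(videoConstraints, @"width", 1280);
    // int targetHeight = constraintIntValue(videoConstraints, @"height", 720);
    // int targetFps    = constraintIntValue(videoConstraints, @"frameRate", 30);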
cloudwebrtc (Member) commented

Hi, is there a more detailed error log?

rostopira (Collaborator, Author) commented Nov 22, 2018

@cloudwebrtc if there were any logs, I could fix it myself :D

cloudwebrtc (Member) commented

@rostopira Updating to the new WebRTC will take more time; you can create a branch so we can work on it together. The video renderer also needs to be modified, because the new WebRTC rendering interface does not provide a kCVPixelFormatType_32BGRA CVPixelBufferRef output.

rostopira (Collaborator, Author) commented

@cloudwebrtc yeah, I've done some research; the frames appear to be in YUV format.
I will soon publish what I've already done to the master branch of my fork and then switch to a branch to do the same for Android (I'm much more of an Android guy than an iOS one, so I think I can handle that part myself).
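
For the renderer, a minimal sketch of that YUV-to-BGRA step: converting a frame's I420 planes into a kCVPixelFormatType_32BGRA CVPixelBufferRef via libyuv, the conversion library used inside WebRTC. Note that the binary GoogleWebRTC pod does not expose libyuv's headers, so it would have to be pulled in separately; the "libyuv.h" include path and the standalone helper function are assumptions for illustration, not the plugin's final renderer code.

    #import <CoreVideo/CoreVideo.h>
    #import <WebRTC/RTCVideoFrame.h>
    #import <WebRTC/RTCVideoFrameBuffer.h>
    #include "libyuv.h"  // assumed to be on the header search path

    // Convert an RTCVideoFrame's I420 planes into a BGRA CVPixelBuffer.
    // libyuv's "ARGB" functions write B,G,R,A byte order in memory, which is
    // what kCVPixelFormatType_32BGRA expects on little-endian iOS devices.
    static CVPixelBufferRef CreateBGRAPixelBuffer(RTCVideoFrame *frame) {
        id<RTCI420Buffer> i420 = [[frame buffer] toI420];
        CVPixelBufferRef pixelBuffer = NULL;
        CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                              i420.width, i420.height,
                                              kCVPixelFormatType_32BGRA,
                                              NULL, &pixelBuffer);
        if (status != kCVReturnSuccess) {
            return NULL;
        }
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);
        uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
        I420ToARGB(i420.dataY, i420.strideY,
                   i420.dataU, i420.strideU,
                   i420.dataV, i420.strideV,
                   dst, (int)CVPixelBufferGetBytesPerRow(pixelBuffer),
                   i420.width, i420.height);
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        return pixelBuffer;  // caller releases with CVPixelBufferRelease()
    }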

rostopira (Collaborator, Author) commented

Here is my fork: https://github.com/rostopira/flutter-webrtc
I will work on updating Android for now.

rostopira (Collaborator, Author) commented Nov 24, 2018

I have made some progress on updating Android:
https://github.com/rostopira/flutter-webrtc/tree/android_update_webrtc
What about creating a Gitter chat?

cloudwebrtc (Member) commented

@rostopira OK, I have joined Gitter.

cloudwebrtc (Member) commented

@rostopira
I found the problem: in the file https://github.com/rostopira/flutter-webrtc/blob/master/ios/Classes/FlutterRTCMediaStream.m#L262, the RTCCameraVideoCapturer object needs to be retained to ensure that it is not destroyed before the capture session completes.

I made a patch that you can apply to the code to solve this problem.

diff --git a/ios/Classes/FlutterRTCMediaStream.m b/ios/Classes/FlutterRTCMediaStream.m
index 14086b9..e2c7d6b 100755
--- a/ios/Classes/FlutterRTCMediaStream.m
+++ b/ios/Classes/FlutterRTCMediaStream.m
@@ -259,7 +259,7 @@ typedef void (^NavigatorUserMediaSuccessCallback)(RTCMediaStream *mediaStream);
     if (videoDevice) {
         RTCVideoSource *videoSource = [self.peerConnectionFactory videoSource];
         // FIXME: Video capturer shouldn't be local to be able to stop
-        RTCCameraVideoCapturer *capt = [[RTCCameraVideoCapturer alloc] initWithDelegate:videoSource];
+        self.videoCapturer = [[RTCCameraVideoCapturer alloc] initWithDelegate:videoSource];
         AVCaptureDeviceFormat *selectedFormat = nil;
         int currentDiff = INT_MAX;
         // TODO: use values from constraints map
@@ -272,7 +272,7 @@ typedef void (^NavigatorUserMediaSuccessCallback)(RTCMediaStream *mediaStream);
             if (diff < currentDiff) {
                 selectedFormat = format;
                 currentDiff = diff;
-            } else if (diff == currentDiff && pixelFormat == [capt preferredOutputPixelFormat]) {
+            } else if (diff == currentDiff && pixelFormat == [self.videoCapturer preferredOutputPixelFormat]) {
                 selectedFormat = format;
             }
         }
@@ -280,7 +280,7 @@ typedef void (^NavigatorUserMediaSuccessCallback)(RTCMediaStream *mediaStream);
             NSLog(@"Capture format is nil. Fallback");
             selectedFormat = [RTCCameraVideoCapturer supportedFormatsForDevice:videoDevice].firstObject;
         }
-        [capt startCaptureWithDevice:videoDevice format:selectedFormat fps:30 completionHandler:^(NSError *error) {
+        [self.videoCapturer startCaptureWithDevice:videoDevice format:selectedFormat fps:30 completionHandler:^(NSError *error) {
             if (error) {
                 NSLog(@"Start capture error: %@", [error localizedDescription]);
             }
diff --git a/ios/Classes/FlutterRTCVideoRenderer.m b/ios/Classes/FlutterRTCVideoRenderer.m
index b840b88..904464c 100755
--- a/ios/Classes/FlutterRTCVideoRenderer.m
+++ b/ios/Classes/FlutterRTCVideoRenderer.m
@@ -74,6 +74,7 @@
 #pragma mark - RTCVideoRenderer methods
 - (void)renderFrame:(RTCVideoFrame *)frame {
     
+    NSLog(@"renderFrame: %dx%d", frame.width, frame.height);
     //TODO: got a frame => scale to _renderSize => convert to BGRA32 pixelBufferRef
     RTCI420Buffer *buffer = [[frame buffer] toI420];
     buffer.dataY;
diff --git a/ios/Classes/FlutterWebRTCPlugin.h b/ios/Classes/FlutterWebRTCPlugin.h
index 0c5e648..3682f7f 100644
--- a/ios/Classes/FlutterWebRTCPlugin.h
+++ b/ios/Classes/FlutterWebRTCPlugin.h
@@ -6,6 +6,7 @@
 #import <WebRTC/RTCDataChannel.h>
 #import <WebRTC/RTCDataChannelConfiguration.h>
 #import <WebRTC/RTCMediaStreamTrack.h>
+#import <WebRTC/RTCCameraVideoCapturer.h>
 
 @class FlutterRTCVideoRenderer;
 
@@ -18,6 +19,7 @@
 @property (nonatomic, strong) NSMutableDictionary<NSNumber *, FlutterRTCVideoRenderer *> *renders;
 @property (nonatomic, retain) UIViewController *viewController;/*for broadcast or ReplayKit */
 @property (nonatomic, strong) NSObject<FlutterBinaryMessenger>* messenger;
+@property (nonatomic, strong) RTCCameraVideoCapturer *videoCapturer;
 
 - (RTCMediaStream*)streamForId:(NSString*)streamId;
 
diff --git a/ios/Classes/FlutterWebRTCPlugin.m b/ios/Classes/FlutterWebRTCPlugin.m
index 6d84a22..1d7b01f 100644
--- a/ios/Classes/FlutterWebRTCPlugin.m
+++ b/ios/Classes/FlutterWebRTCPlugin.m
@@ -257,7 +257,11 @@
             for (RTCVideoTrack *track in stream.videoTracks) {
                 [self.localTracks removeObjectForKey:track.trackId];
                 RTCVideoTrack *videoTrack = (RTCVideoTrack *)track;
-                //TODO(rostopira)
+                RTCVideoSource *source = videoTrack.source;
+                if(source){
+                    [self.videoCapturer stopCapture];
+                    self.videoCapturer = nil;
+                }
             }
             for (RTCAudioTrack *track in stream.audioTracks) {
                 [self.localTracks removeObjectForKey:track.trackId];
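
A usage note on the patch: with the capturer retained on the plugin, tear-down can also be done asynchronously via stopCaptureWithCompletionHandler: on RTCCameraVideoCapturer, which fires only after the underlying capture session has actually stopped. A minimal sketch, assuming the videoCapturer property added above (the stopLocalVideoCapture method name is illustrative):

    // Hypothetical teardown helper on FlutterWebRTCPlugin.
    - (void)stopLocalVideoCapture {
        RTCCameraVideoCapturer *capturer = self.videoCapturer;
        if (capturer == nil) {
            return;
        }
        __weak typeof(self) weakSelf = self;
        [capturer stopCaptureWithCompletionHandler:^{
            // The capture session has fully stopped; it is now safe to drop the reference.
            weakSelf.videoCapturer = nil;
        }];
    }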

cloudwebrtc (Member) commented

@rostopira
I created an ios_update_webrtc branch from your master and fixed the VideoCapturer and VideoRenderer; I am still working on the other issues.

cloudwebrtc added the 🚀enhancement (New feature or request) and 😭help wanted (Extra attention is needed) labels and removed the 🚀enhancement (New feature or request) label on Dec 2, 2018
MegaManX32 commented

Hey @cloudwebrtc, I am currently using the latest WebRTC on iOS, and the documentation is really sparse, if not non-existent. I just wanted to thank you: your comment about the video capturer not being retained solved my problems as well.
