Study notes on this article: initializing the capture session

This method should look familiar by now: it is almost identical to the initialization used for taking photos. Note that in step 6 sessionPreset is set to AVCaptureSessionPreset1280x720, i.e. the session outputs 720p video. The documentation lists many other resolutions to choose from. And remember the two methods for starting and stopping a session? startRunning and stopRunning — easy names to remember:

(Figure: iOS Camera Capture — the iOS camera capture pipeline)
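As a minimal sketch of starting and stopping the session (assuming a configured AVCaptureSession in a `_session` ivar; the queue name is illustrative — Apple recommends calling startRunning off the main thread because it blocks until capture begins):

```objc
// Sketch: start/stop a capture session on a private serial queue.
dispatch_queue_t sessionQueue = dispatch_queue_create("camera.session", DISPATCH_QUEUE_SERIAL);
dispatch_async(sessionQueue, ^{
    if (!_session.isRunning) {
        [_session startRunning];   // blocks until capture actually starts
    }
});
// ...later, to stop capturing:
dispatch_async(sessionQueue, ^{
    if (_session.isRunning) {
        [_session stopRunning];
    }
});
```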

AVCaptureDeviceInput

Concept:
A capture input, a concrete subclass of AVCaptureInput, that supplies the media data captured from an AVCaptureDevice.
Role:
It is the input source of an AVCaptureSession: it feeds the session the media data produced by a device connected to the system, i.e. it drives the input hardware — for example a camera or a microphone — and captures the media that hardware produces.
"Input hardware" here means, for example, the camera that captures outside images/video into the phone, or the microphone that captures outside sound into the phone.
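Putting the concept into code, a minimal sketch (assuming an existing AVCaptureSession named `session`; error handling abbreviated):

```objc
// Create a device input from the default video device and attach it.
NSError *error = nil;
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (input && [session canAddInput:input]) {
    [session addInput:input];  // the session now receives the camera's media data
}
```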

      Demo address for this article 

General steps for taking photos and recording video with the AVFoundation framework

  1. Create an AVCaptureSession object
  2. Use the AVCaptureDevice class methods to obtain the devices you need: a camera for photos and video, a microphone for audio recording
  3. Create an AVCaptureDeviceInput object from the AVCaptureDevice
  4. Create the data-output manager: an AVCaptureStillImageOutput for photos, or an AVCaptureMovieFileOutput for recording video
  5. Add the input (AVCaptureDeviceInput) and the output (AVCaptureOutput) to the AVCaptureSession
  6. Create an AVCaptureVideoPreviewLayer bound to the session, add the layer to a visible container, and call the session's startRunning method to begin capturing
  7. Save the captured audio or video to the target file


AVCaptureOutput (the electricity)

AVCaptureOutput: the data-output manager. Output flows out of the AVCaptureSession through it, and you receive the corresponding data by implementing the relevant delegate protocols.
Subclasses: AVCaptureFileOutput, AVCapturePhotoOutput, AVCaptureStillImageOutput, AVCaptureVideoDataOutput, AVCaptureAudioDataOutput, AVCaptureAudioPreviewOutput, AVCaptureDepthDataOutput, AVCaptureMetadataOutput
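For example, a minimal sketch of wiring up an AVCaptureVideoDataOutput, whose sample-buffer delegate receives raw frames (assuming `self` conforms to AVCaptureVideoDataOutputSampleBufferDelegate and an AVCaptureSession named `session` exists):

```objc
// Deliver raw video frames to a delegate on a private serial queue.
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.alwaysDiscardsLateVideoFrames = YES;  // drop late frames rather than queue them
dispatch_queue_t frameQueue = dispatch_queue_create("camera.frames", DISPATCH_QUEUE_SERIAL);
[videoOutput setSampleBufferDelegate:self queue:frameQueue];
if ([session canAddOutput:videoOutput]) {
    // Frames then arrive in captureOutput:didOutputSampleBuffer:fromConnection:
    [session addOutput:videoOutput];
}
```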

AVCaptureDevice

Concept:
A capture device — typically the front camera, the back camera, or the microphone (audio input). One AVCaptureDevice instance represents one device.
How to obtain one:
An AVCaptureDevice instance cannot be created directly, because the device already exists; we only need to get a reference to it.
There are three ways to do so:
First: get the camera matching a given position

The device-position enum
 AVCaptureDevicePosition has 3 values (cameras only, not the microphone):
 AVCaptureDevicePositionUnspecified = 0, // unspecified
 AVCaptureDevicePositionBack        = 1, // back
 AVCaptureDevicePositionFront       = 2, // front
Device types:
 // AVCaptureDeviceType device types
 AVCaptureDeviceTypeBuiltInMicrophone // microphone
 AVCaptureDeviceTypeBuiltInWideAngleCamera // wide-angle camera
 AVCaptureDeviceTypeBuiltInTelephotoCamera // longer focal length than the wide-angle camera; only discoverable via AVCaptureDeviceDiscoverySession
 AVCaptureDeviceTypeBuiltInDualCamera // zooming camera that switches automatically between wide-angle and telephoto; discovered the same way as AVCaptureDeviceTypeBuiltInTelephotoCamera
 AVCaptureDeviceTypeBuiltInDuoCamera // replaced by AVCaptureDeviceTypeBuiltInDualCamera in iOS 10.2
The call looks like this:
 self.device = [AVCaptureDevice defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInDuoCamera mediaType:AVMediaTypeVideo position:AVCaptureDevicePositionBack];

Second: get the back camera directly from the media type

self.device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

Third: fetch all camera devices, then pick the front or back camera by its position

- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position {
    // 1. Get all currently available video devices.
    // Before iOS 10 (deprecated, though it still appears to work):
    // NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    // iOS 10 and later:
    AVCaptureDeviceDiscoverySession *deviceSession = [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera, AVCaptureDeviceTypeBuiltInDualCamera] mediaType:AVMediaTypeVideo position:AVCaptureDevicePositionUnspecified];
    NSArray *devices = deviceSession.devices;
    // 2. Return the first device whose position matches.
    for (AVCaptureDevice *device in devices) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}

Role:
It supplies media data to the session (AVCaptureSession): from the device (AVCaptureDevice) you create an AVCaptureDeviceInput, then add that input to the capture session.

      The passage above also reveals a drawback of the first approach.

Recording video

Recording video is much like taking photos, except that it needs one extra input-management object: the microphone (audio input source). For photos you create an AVCaptureStillImageOutput output source; for video recording you use AVCaptureMovieFileOutput.

- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    // Initialize the capture session
    _session = [[AVCaptureSession alloc] init];
    if ([_session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
        [_session setSessionPreset:AVCaptureSessionPreset1280x720];
    }

    // Get the camera input device
    AVCaptureDevice *cameraDevice = [self getCameraDeviceWithPosition:AVCaptureDevicePositionBack];
    if (cameraDevice == nil) {
        NSLog(@"Failed to get the back camera");
        return;
    }

    // Create the camera input source from the camera device
    NSError *error = nil;
    _cameraDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:&error];
    if (error) {
        NSLog(@"%@", error.localizedDescription);
        return;
    }

    // Get the microphone input device
    AVCaptureDevice *audioDevice = [AVCaptureDevice devicesWithMediaType:AVMediaTypeAudio].firstObject;
    // Create the microphone input source
    _audioDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];

    // Create the data-output manager
    _movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];

    // Add the input sources
    if ([_session canAddInput:_cameraDeviceInput]) {
        [_session addInput:_cameraDeviceInput];
    }
    if ([_session canAddInput:_audioDeviceInput]) {
        [_session addInput:_audioDeviceInput];
    }

    // Add the output source
    if ([_session canAddOutput:_movieFileOutput]) {
        [_session addOutput:_movieFileOutput];
    }

    // Configure video stabilization. Note: the connection only exists once the
    // output has been added to the session, so this must come after addOutput:.
    AVCaptureConnection *connection = [_movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
    if ([connection isVideoStabilizationSupported]) {
        // Setting preferredVideoStabilizationMode to anything other than
        // AVCaptureVideoStabilizationModeOff stabilizes the video flowing
        // through this connection whenever the mode is available.
        connection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
    }

    // Create the preview layer
    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_session];
    _previewLayer.frame = self.view.bounds;
    _previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; // fill mode

    [self.view.layer insertSublayer:_previewLayer atIndex:0];
}

Starting the recording

- (IBAction)startRecording:(UIButton *)sender {

   if ([self.movieFileOutput isRecording]) {
       [self.movieFileOutput stopRecording];
       sender.selected = YES;
       return;
   }

   // Get the connection
   AVCaptureConnection *connection = [self.movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
   connection.videoOrientation = self.previewLayer.connection.videoOrientation;

   NSString *filePath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES).lastObject stringByAppendingPathComponent:@"myVideo.mp4"];
   NSURL *url = [NSURL fileURLWithPath:filePath];
   // Start recording and set the delegate
   [self.movieFileOutput startRecordingToOutputFileURL:url recordingDelegate:self];
}

Conform to the AVCaptureFileOutputRecordingDelegate protocol and implement its delegate methods

The following method is required:

#pragma mark - AVCaptureFileOutputRecordingDelegate
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error
{
    NSLog(@"Finished recording video");
    // Save the video to the photo album.
    // (ALAssetsLibrary has been deprecated since iOS 9; PHPhotoLibrary from
    // the Photos framework is the modern replacement.)
    ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
    [assetsLibrary writeVideoAtPathToSavedPhotosAlbum:outputFileURL completionBlock:^(NSURL *assetURL, NSError *error) {
        if (error) {
            NSLog(@"Error while saving the video to the album: %@", error.localizedDescription);
            return;
        }
        NSLog(@"Successfully saved the video to the album.");
    }];
}

The official documentation introduces this method as follows:

This method is called when the file output has finished writing all data to a file whose recording was stopped, either because startRecordingToOutputFileURL:recordingDelegate: or stopRecording were called, or because an error, described by the error parameter, occurred (if no error occurred, the error parameter will be nil). This method will always be called for each recording request, even if no data is successfully written to the file.

In short: this delegate method fires exactly once for every recording request — whether recording stopped because startRecordingToOutputFileURL:recordingDelegate: or stopRecording was called, or because an error described by the error parameter occurred (error is nil on success) — even when no data was successfully written to the file.

The delegate method called when recording starts:

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput didStartRecordingToOutputFileAtURL:(NSURL *)fileURL fromConnections:(NSArray *)connections
{
    NSLog(@"Started recording video");
}

Switching cameras

- (IBAction)switchCamera:(id)sender {
    //1. Get the current input device
    AVCaptureDevice *oldCaptureDevice = [self.cameraDeviceInput device];
    //2. Remove the notifications registered for the old input device
    [self removeNotificationFromDevice:oldCaptureDevice];
    //3. Get the new input device
    //3.1. Get the old device's position
    AVCaptureDevicePosition oldPosition = oldCaptureDevice.position;
    //3.2. Work out the new device's position
    AVCaptureDevicePosition currentPosition = AVCaptureDevicePositionFront;
    if (oldPosition == AVCaptureDevicePositionFront || oldPosition == AVCaptureDevicePositionUnspecified) {
        currentPosition = AVCaptureDevicePositionBack;
    }
    //3.3. Get the new input device from its position
    AVCaptureDevice *currentCaptureDevice = [self getCameraDeviceWithPosition:currentPosition];
    //3.4. Add notifications for the new device
    [self addNotificationToCaptureDevice:currentCaptureDevice];

    //4. Create an input source from the new input device
    NSError *error = nil;
    AVCaptureDeviceInput *currentInput = [AVCaptureDeviceInput deviceInputWithDevice:currentCaptureDevice error:&error];
    if (error) {
        NSLog(@"%@", error.localizedDescription);
        return;
    }

    //5. Swap the input sources
    //5.1. Begin the configuration change
    [_session beginConfiguration];
    //5.2. Remove the old input source
    [_session removeInput:self.cameraDeviceInput];
    //5.3. Add the new input source
    if ([_session canAddInput:currentInput]) {
        [_session addInput:currentInput];
        self.cameraDeviceInput = currentInput;
    }
    //5.4. Commit the configuration change
    [_session commitConfiguration];
}

As you can see, this is the same switching logic used when taking photos.

Rough result

(Figure: 视频.gif — an animated demo of the recording)

One more thing to note: because the app uses the camera and the microphone, the corresponding usage-description keys must be added to Info.plist.
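The two relevant keys are NSCameraUsageDescription and NSMicrophoneUsageDescription; the description strings below are placeholders to be replaced with your own wording:

```xml
<key>NSCameraUsageDescription</key>
<string>This app needs the camera to record video.</string>
<key>NSMicrophoneUsageDescription</key>
<string>This app needs the microphone to record audio.</string>
```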


Code address: click here

- (void)setupCaptureSession {
    // 1. Get the video device (back camera)
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if (device.position == AVCaptureDevicePositionBack) {
            self.videoDevice = device;
            break;
        }
    }
    // 2. Get the audio device
    self.audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    // 3. Create the video input
    NSError *error = nil;
    self.videoInput = [AVCaptureDeviceInput deviceInputWithDevice:self.videoDevice error:&error];
    if (error) { return; }
    // 4. Create the audio input
    self.audioInput = [AVCaptureDeviceInput deviceInputWithDevice:self.audioDevice error:&error];
    if (error) { return; }
    // 5. Create the movie file output
    self.movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
    // 6. Build the session
    self.captureSession = [[AVCaptureSession alloc] init];
    self.captureSession.sessionPreset = AVCaptureSessionPreset1280x720;
    if ([self.captureSession canAddInput:self.videoInput]) {
        [self.captureSession addInput:self.videoInput];
    }
    if ([self.captureSession canAddInput:self.audioInput]) {
        [self.captureSession addInput:self.audioInput];
    }
    if ([self.captureSession canAddOutput:self.movieFileOutput]) {
        [self.captureSession addOutput:self.movieFileOutput];
    }
    // 7. Preview the picture
    self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    [self.previewView.layer addSublayer:self.previewLayer];
}
AVCaptureDevice (the generator)

AVCaptureDevice: an input device, i.e. a camera or a microphone. You can tune how the device captures by setting parameters such as exposure, white balance, and focus.
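As a minimal sketch of tuning one such parameter (assuming a `device` variable holding a camera; the device must be locked for configuration first, and support for the mode checked):

```objc
// Enable continuous autofocus if the camera supports it.
NSError *error = nil;
if ([device lockForConfiguration:&error]) {
    if ([device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        device.focusMode = AVCaptureFocusModeContinuousAutoFocus;
    }
    [device unlockForConfiguration];  // always balance the lock
} else {
    NSLog(@"%@", error.localizedDescription);
}
```

The same lock/check/set/unlock pattern applies to exposure and white-balance modes.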

Capture classes:

These are all the classes in the AVFoundation framework whose names contain "Capture". They are powerful: together they implement video recording, audio recording, and photo taking.
According to Apple's official documentation, the common capture classes relate to each other as shown in the figure below:

(Figure: relationships among the common capture classes, from Apple's official documentation)

The sections below walk through these classes one by one:

 

Taking a photo

#pragma mark - Taking a photo
- (IBAction)takePhoto:(id)sender {
    //1. Get the connection from the data-output manager (the output source)
    AVCaptureConnection *connection = [self.imageOutput connectionWithMediaType:AVMediaTypeVideo];
    //2. Capture the output data through that connection
    [self.imageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        // Get the image data
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *image = [UIImage imageWithData:imageData];
        // Save it to the photo album
        UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
    }];
}

On touch-up, call the stopRecord method:
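The post does not show stopRecord itself; a minimal sketch of what it presumably looks like (the method name comes from the text above, the body is an assumption):

```objc
// Hypothetical stopRecord: stop the movie file output if it is recording.
// The delegate's ...didFinishRecordingToOutputFileAtURL... method then
// fires once the file has been fully written.
- (void)stopRecord {
    if ([self.movieFileOutput isRecording]) {
        [self.movieFileOutput stopRecording];
    }
}
```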

Overview

Throughout this walkthrough the author likens camera capture to the familiar process of generating electricity: in both cases, a fixed sequence of steps yields the final product.
