iPhone: taking an augmented reality screenshot with AVCaptureVideoPreviewLayer

I have a small augmented reality app that I am developing, and I would like to know how to save a screenshot of what the user sees with the tap of a button or on a timer.

The app works by overlaying the live camera feed on top of another UIView. I can save screenshots using the power button + home button; those are saved to the camera roll. However, Apple will not render the AVCaptureVideoPreviewLayer, even if I ask the window to draw itself. It will leave a transparent piece of canvas wherever the preview layer is.

What is the proper way for an augmented reality app to save screenshots, including the transparency and the subviews?

// displaying a live preview on one of the views
- (void)startCapture
{
    captureSession = [[AVCaptureSession alloc] init];

    // pick the front or rear video device
    AVCaptureDevice *videoCaptureDevice = nil;
    // AVCaptureDevice *videoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in videoDevices)
    {
        if (useFrontCamera)
        {
            if (device.position == AVCaptureDevicePositionFront)
            {
                // front-facing camera exists
                videoCaptureDevice = device;
                break;
            }
        }
        else
        {
            if (device.position == AVCaptureDevicePositionBack)
            {
                // rear-facing camera exists
                videoCaptureDevice = device;
                break;
            }
        }
    }

    NSError *error = nil;
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&error];
    if (videoInput)
    {
        [captureSession addInput:videoInput];
    }
    else
    {
        // Handle the failure.
    }

    // queue and videoSettings are ivars, set up as in activateCameraFeed below;
    // the output must be created and configured before canAddOutput: is asked
    captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    [captureOutput setAlwaysDiscardsLateVideoFrames:YES];
    [captureOutput setSampleBufferDelegate:self queue:queue];
    [captureOutput setVideoSettings:videoSettings];
    dispatch_release(queue);
    if ([captureSession canAddOutput:captureOutput])
    {
        [captureSession addOutput:captureOutput];
    }
    else
    {
        // handle failure
    }

    previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    UIView *aView = arOverlayView;
    previewLayer.frame = CGRectMake(0, 0, arOverlayView.frame.size.width, arOverlayView.frame.size.height); // Assume you want the preview layer to fill the view.
    [aView.layer addSublayer:previewLayer];

    [captureSession startRunning];
}

// ask the entire window to draw itself in a graphics context. This call
// will not render the AVCaptureVideoPreviewLayer; it has to be replaced
// with a UIImageView- or GL-based view.
// See the following code for creating a dynamically updated UIImageView.
- (void)saveScreenshot
{
    UIGraphicsBeginImageContext(appDelegate.window.bounds.size);
    [appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIImageWriteToSavedPhotosAlbum(screenshot, self,
            @selector(image:didFinishSavingWithError:contextInfo:), nil);
}

// image saved to camera roll callback
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo
{
    // Was there an error?
    if (error != NULL)
    {
        // Show error message...
        NSLog(@"save failed");
    }
    else // No errors
    {
        // Show message image successfully saved
        NSLog(@"save successful");
    }
}
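
Since the question asks for a button tap or a timer, triggering the capture is just a matter of wiring saveScreenshot up. A minimal sketch follows; the action name, timer method, and 5-second interval are my assumptions, not the poster's code:

// hypothetical wiring: call saveScreenshot from a button or a repeating timer
- (IBAction)screenshotButtonTapped:(id)sender
{
    [self saveScreenshot];
}

- (void)startScreenshotTimer
{
    // arbitrary 5-second interval
    [NSTimer scheduledTimerWithTimeInterval:5.0
                                     target:self
                                   selector:@selector(saveScreenshot)
                                   userInfo:nil
                                    repeats:YES];
}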

Here is the code for creating the image:

// you must add your view controller as a delegate of the camera output to be notified of the buffered data

- (void)activateCameraFeed
{
    // this is the code responsible for capturing the feed for still image processing
    dispatch_queue_t queue = dispatch_queue_create("com.AugmentedRealityGlamour.ImageCaptureQueue", NULL);

    captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    [captureOutput setAlwaysDiscardsLateVideoFrames:YES];
    [captureOutput setSampleBufferDelegate:self queue:queue];
    [captureOutput setVideoSettings:videoSettings];
    dispatch_release(queue);

    // ......configure audio feed, add inputs and outputs
}

// buffer delegate callback
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if ( ignoreImageStream )
        return;
    [self performImageCaptureFrom:sampleBuffer];
}
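
The videoSettings dictionary is referenced but never shown. Since performImageCaptureFrom: below rejects any buffer that is not 32BGRA, it presumably looks something like this sketch:

// assumed configuration: ask for BGRA buffers so they match what
// performImageCaptureFrom: expects
NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:
        [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
    forKey:(id)kCVPixelBufferPixelFormatTypeKey];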

Create a UIImage:

- (void)performImageCaptureFrom:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer;

    if ( CMSampleBufferGetNumSamples(sampleBuffer) != 1 )
        return;
    if ( !CMSampleBufferIsValid(sampleBuffer) )
        return;
    if ( !CMSampleBufferDataIsReady(sampleBuffer) )
        return;

    imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if ( CVPixelBufferGetPixelFormatType(imageBuffer) != kCVPixelFormatType_32BGRA )
        return;

    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGImageRef newImage = nil;

    if ( cameraDeviceSetting == CameraDeviceSetting640x480 )
    {
        // buffer is already 640x480: wrap it directly
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        newImage = CGBitmapContextCreateImage(newContext);
        CGColorSpaceRelease( colorSpace );
        CGContextRelease(newContext);
    }
    else
    {
        // higher-resolution buffer: copy it, then scale it down to 640x480
        // (the copy must be sized to the actual buffer, not 640*4*480)
        uint8_t *tempAddress = malloc( bytesPerRow * height );
        memcpy( tempAddress, baseAddress, bytesPerRow * height );
        baseAddress = tempAddress;

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst);
        newImage = CGBitmapContextCreateImage(newContext);
        CGContextRelease(newContext);

        newContext = CGBitmapContextCreate(baseAddress, 640, 480, 8, 640*4, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGContextScaleCTM( newContext, (CGFloat)640/(CGFloat)width, (CGFloat)480/(CGFloat)height );
        // draw the full-size image; the scaled CTM maps it onto the 640x480 context
        CGContextDrawImage(newContext, CGRectMake(0, 0, width, height), newImage);
        CGImageRelease(newImage);
        newImage = CGBitmapContextCreateImage(newContext);

        CGColorSpaceRelease( colorSpace );
        CGContextRelease(newContext);
        free( tempAddress );
    }

    if ( newImage != nil )
    {
        // modified for iOS 5.0 with ARC
        tempImage = [[UIImage alloc] initWithCGImage:newImage scale:(CGFloat)1.0 orientation:cameraImageOrientation];
        CGImageRelease(newImage);

        // this call creates the illusion of a preview layer while we are
        // actively switching the images created with this method
        [self performSelectorOnMainThread:@selector(newCameraImageNotification:) withObject:tempImage waitUntilDone:YES];
    }

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
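
cameraDeviceSetting, cameraImageOrientation, and tempImage are ivars the snippet does not show. For illustration only, a hypothetical way cameraImageOrientation might be derived for the rear camera, whose buffers arrive in landscape:

// hypothetical helper: map the device orientation to the UIImageOrientation
// applied to the wrapped camera frame (rear camera, not mirrored)
- (UIImageOrientation)currentCameraImageOrientation
{
    switch ([[UIDevice currentDevice] orientation])
    {
        case UIDeviceOrientationLandscapeLeft:      return UIImageOrientationUp;
        case UIDeviceOrientationLandscapeRight:     return UIImageOrientationDown;
        case UIDeviceOrientationPortraitUpsideDown: return UIImageOrientationLeft;
        default:                                    return UIImageOrientationRight; // portrait
    }
}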

Update the interface with a UIView that can actually be rendered in a graphics context:

- (void)newCameraImageNotification:(UIImage *)newImage
{
    if ( newImage == nil )
        return;

    [arOverlayView setImage:newImage];
    // or do more advanced processing of the image
}
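
With the preview backed by this live-updating UIImageView, the window-based saveScreenshot from the question now captures the camera frame too. One might also pause the stream while rendering so the frame doesn't change mid-save; a minimal sketch reusing the ignoreImageStream flag from the delegate callback above:

// sketch: freeze the simulated preview, render, then resume
- (void)captureScreenshotNow
{
    ignoreImageStream = YES;  // delegate callback now drops incoming frames
    [self saveScreenshot];    // renderInContext: sees the current UIImageView contents
    ignoreImageStream = NO;
}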

If you want a snapshot of what is on screen, this is what I am doing in one of my camera apps. I haven't touched this code in a long time, so there may well be a better 5.0 way to do it now, but it is solid, with over 1 million downloads. There is a function for capturing a UIView-based screen and one for capturing an OpenGL ES 1 screen:

//
// ScreenCapture.m
// LiveEffectsCam
//
// Created by John Carter on 10/8/10.
//

#import "ScreenCapture.h"
#import <QuartzCore/CABase.h>
#import <QuartzCore/CATransform3D.h>
#import <QuartzCore/CALayer.h>
#import <QuartzCore/CAScrollLayer.h>
#import <OpenGLES/EAGL.h>
#import <OpenGLES/ES1/gl.h>
#import <OpenGLES/ES1/glext.h>
#import <QuartzCore/QuartzCore.h>
#import <OpenGLES/EAGLDrawable.h>

@implementation ScreenCapture

+ (UIImage *) GLViewToImage:(GLView *)glView
{
    UIImage *glImage = [GLView snapshot:glView]; // returns an autoreleased image
    return glImage;
}

+ (UIImage *) GLViewToImage:(GLView *)glView withOverlayImage:(UIImage *)overlayImage
{
    UIImage *glImage = [GLView snapshot:glView]; // returns an autoreleased image

    // Merge Image and Overlay
    //
    CGRect imageRect = CGRectMake((CGFloat)0.0, (CGFloat)0.0, glImage.size.width*glImage.scale, glImage.size.height*glImage.scale);
    CGImageRef overlayCopy = CGImageCreateCopy( overlayImage.CGImage );
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, (int)glImage.size.width*glImage.scale, (int)glImage.size.height*glImage.scale, 8, (int)glImage.size.width*4*glImage.scale, colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, imageRect, glImage.CGImage);
    CGContextDrawImage(context, imageRect, overlayCopy);
    CGImageRef newImage = CGBitmapContextCreateImage(context);
    UIImage *combinedViewImage = [[[UIImage alloc] initWithCGImage:newImage] autorelease];
    CGImageRelease(newImage);
    CGImageRelease(overlayCopy);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    return combinedViewImage;
}

+ (UIImage *) UIViewToImage:(UIView *)view withOverlayImage:(UIImage *)overlayImage
{
    UIImage *viewImage = [ScreenCapture UIViewToImage:view]; // returns an autoreleased image

    // Merge Image and Overlay
    //
    CGRect imageRect = CGRectMake((CGFloat)0.0, (CGFloat)0.0, viewImage.size.width*viewImage.scale, viewImage.size.height*viewImage.scale);
    CGImageRef overlayCopy = CGImageCreateCopy( overlayImage.CGImage );
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, (int)viewImage.size.width*viewImage.scale, (int)viewImage.size.height*viewImage.scale, 8, (int)viewImage.size.width*4*viewImage.scale, colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, imageRect, viewImage.CGImage);
    CGContextDrawImage(context, imageRect, overlayCopy);
    CGImageRef newImage = CGBitmapContextCreateImage(context);
    UIImage *combinedViewImage = [[[UIImage alloc] initWithCGImage:newImage] autorelease];
    CGImageRelease(newImage);
    CGImageRelease(overlayCopy);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    return combinedViewImage;
}

+ (UIImage *) UIViewToImage:(UIView *)view
{
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
    //
    // CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    CGSize imageSize = CGSizeMake( (CGFloat)480.0, (CGFloat)640.0 ); // camera image size

    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);

    CGContextRef context = UIGraphicsGetCurrentContext();

    // Start with the view...
    //
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, [view center].x, [view center].y);
    CGContextConcatCTM(context, [view transform]);
    CGContextTranslateCTM(context, -[view bounds].size.width * [[view layer] anchorPoint].x, -[view bounds].size.height * [[view layer] anchorPoint].y);
    [[view layer] renderInContext:context];
    CGContextRestoreGState(context);

    // ...then repeat for every subview from back to front
    //
    for (UIView *subView in [view subviews])
    {
        if ( [subView respondsToSelector:@selector(screen)] )
            if ( [(UIWindow *)subView screen] == [UIScreen mainScreen] )
                continue;

        CGContextSaveGState(context);
        CGContextTranslateCTM(context, [subView center].x, [subView center].y);
        CGContextConcatCTM(context, [subView transform]);
        CGContextTranslateCTM(context, -[subView bounds].size.width * [[subView layer] anchorPoint].x, -[subView bounds].size.height * [[subView layer] anchorPoint].y);
        [[subView layer] renderInContext:context];
        CGContextRestoreGState(context);
    }

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); // autoreleased image
    UIGraphicsEndImageContext();

    return image;
}

+ (UIImage *) snapshot:(GLView *)eaglview
{
    NSInteger x = 0;
    NSInteger y = 0;
    NSInteger width = [eaglview backingWidth];
    NSInteger height = [eaglview backingHeight];
    NSInteger dataLength = width * height * 4;

    NSUInteger i;
    for ( i = 0; i < 100; i++ )
    {
        glFlush();
        CFRunLoopRunInMode(kCFRunLoopDefaultMode, (float)1.0/(float)60.0, FALSE);
    }

    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    //
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    //
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    //
    NSInteger widthInPoints;
    NSInteger heightInPoints;

    if (NULL != UIGraphicsBeginImageContextWithOptions)
    {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        //
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else
    {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        //
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // The UIKit coordinate system is upside down relative to the GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    //
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); // autoreleased image
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}

@end
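
A hypothetical call site, reusing the save callback from earlier (arOverlayView, glView, and hudImage stand in for your own views and images):

// capture the UIView-based AR screen and save it to the camera roll
UIImage *screenshot = [ScreenCapture UIViewToImage:arOverlayView];
UIImageWriteToSavedPhotosAlbum(screenshot, self,
        @selector(image:didFinishSavingWithError:contextInfo:), nil);

// or, for a GL-based scene composited with a HUD image:
// UIImage *glScreenshot = [ScreenCapture GLViewToImage:glView withOverlayImage:hudImage];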