How do I export a UIImage array as a movie?
I have a serious problem: I have an NSArray with several UIImage objects. What I want to do now is create a movie from these UIImages, but I have no idea how to do it.
I hope someone can help me or send me a code snippet that does something like what I want.
Edit: For future reference - after applying the solution, if the video comes out distorted, make sure the width of the images/area you are capturing is a multiple of 16. Found after many hours of struggle here:
Why does my movie made from UIImages come out distorted?
Here is the complete solution (just make sure the width is a multiple of 16):
http://codethink.no-ip.org/wordpress/archives/673
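For example, a minimal sketch (in Swift; the helper name is hypothetical, not from the linked post) of clamping a capture width down to the nearest multiple of 16:

// Hypothetical helper: round a capture width down to the nearest
// multiple of 16 to avoid the distortion described above.
func widthRoundedDownToMultipleOf16(_ width: Int) -> Int {
    return (width / 16) * 16
}
// e.g. widthRoundedDownToMultipleOf16(638) == 624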
10 Answers
Have a look at AVAssetWriter and the rest of the AVFoundation framework. The writer has an input of type AVAssetWriterInput, which in turn has a method called appendSampleBuffer: that lets you add individual frames to a video stream. Essentially, you will have to:
1) Wire up the writer:
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:somePath] fileType:AVFileTypeQuickTimeMovie
error:&error];
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:640], AVVideoWidthKey,
[NSNumber numberWithInt:480], AVVideoHeightKey,
nil];
AVAssetWriterInput* writerInput = [[AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings] retain]; //retain should be removed if ARC
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
2) Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:…]; //use kCMTimeZero if unsure
3) Write some samples:
// Or you can use AVAssetWriterInputPixelBufferAdaptor.
// That lets you feed the writer input data from a CVPixelBuffer
// that’s quite easy to create from a CGImage.
[writerInput appendSampleBuffer:sampleBuffer];
4) Finish the session:
[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:…]; //optional, you can call finishWriting without specifying an endTime
[videoWriter finishWriting]; //deprecated in iOS 6
/*
[videoWriter finishWritingWithCompletionHandler:...]; //ios 6.0+
*/
You will still have to fill in a lot of blanks, but I think the only really hard remaining part is getting a pixel buffer from a CGImage:
- (CVPixelBufferRef) newPixelBufferFromCGImage: (CGImageRef) image
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
frameSize.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
frameSize.height, 8, 4*frameSize.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, frameTransform);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
frameSize is a CGSize describing your target frame size and frameTransform is a CGAffineTransform that lets you transform the images when you draw them into frames.
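For illustration only (shown here in Swift; these values are assumptions, not part of the original answer), they might look like:

import CoreGraphics

// Assumed values: a 640x480 target frame, drawn without any transform.
let frameSize = CGSize(width: 640, height: 480)
let frameTransform = CGAffineTransform.identity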
Here is the latest working code on iOS 8, in Objective-C.
We had to make a variety of tweaks to @Zoul's answer above to get it to work on the latest version of Xcode and iOS 8. Here is our complete working code that takes an array of UIImages, makes them into a .mov file, saves it to a temp directory, and then moves it to the camera roll. We assembled code from several different posts to get this working. We have highlighted the traps we had to solve to get the code working in our comments.
(1) Create a collection of UIImages
[self saveMovieToLibrary]
- (IBAction)saveMovieToLibrary
{
// You just need the height and width of the video here
// For us, our input and output video was 640 height x 480 width
// which is what we get from the iOS front camera
ATHSingleton *singleton = [ATHSingleton singletons];
int height = singleton.screenHeight;
int width = singleton.screenWidth;
// You can save a .mov or a .mp4 file
//NSString *fileNameOut = @"temp.mp4";
NSString *fileNameOut = @"temp.mov";
// We chose to save in the tmp/ directory on the device initially
NSString *directoryOut = @"tmp/";
NSString *outFile = [NSString stringWithFormat:@"%@%@",directoryOut,fileNameOut];
NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:outFile];
NSURL *videoTempURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), fileNameOut]];
// WARNING: AVAssetWriter does not overwrite files for us, so remove the destination file if it already exists
NSFileManager *fileManager = [NSFileManager defaultManager];
[fileManager removeItemAtPath:[videoTempURL path] error:NULL];
// Create your own array of UIImages
NSMutableArray *images = [NSMutableArray array];
for (int i=0; i<singleton.numberOfScreenshots; i++)
{
// This was our routine that returned a UIImage. Just use your own.
UIImage *image =[self uiimageFromCopyOfPixelBuffersUsingIndex:i];
// We used a routine to write text onto every image
// so we could validate the images were actually being written when testing. This was it below.
image = [self writeToImage:image Text:[NSString stringWithFormat:@"%i",i ]];
[images addObject:image];
}
// If you just want to manually add a few images - here is code you can uncomment
// NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"Documents/movie.mp4"]];
// NSArray *images = [[NSArray alloc] initWithObjects:
// [UIImage imageNamed:@"add_ar.png"],
// [UIImage imageNamed:@"add_ja.png"],
// [UIImage imageNamed:@"add_ru.png"],
// [UIImage imageNamed:@"add_ru.png"],
// [UIImage imageNamed:@"add_ar.png"],
// [UIImage imageNamed:@"add_ja.png"],
// [UIImage imageNamed:@"add_ru.png"],
// [UIImage imageNamed:@"add_ar.png"],
// [UIImage imageNamed:@"add_en.png"], nil];
[self writeImageAsMovie:images toPath:path size:CGSizeMake(height, width)];
}
This is the main method that creates your AssetWriter and adds images to it for writing.
(2) Wire up an AVAssetWriter
-(void)writeImageAsMovie:(NSArray *)array toPath:(NSString*)path size:(CGSize)size
{
NSError *error = nil;
// FIRST, start up an AVAssetWriter instance to write your video
// Give it a destination path (for us: tmp/temp.mov)
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
fileType:AVFileTypeQuickTimeMovie
error:&error];
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
(3) Start a writing session (NOTE: the method continues from above)
//Start a SESSION of writing.
// After you start a session, you will keep adding image frames
// until you are complete - then you will tell it you are done.
[videoWriter startWriting];
// This starts your video at time = 0
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL;
// This was just our utility class to get screen sizes etc.
ATHSingleton *singleton = [ATHSingleton singletons];
int i = 0;
while (1)
{
// Check if the writer is ready for more data, if not, just wait
if(writerInput.readyForMoreMediaData){
CMTime frameTime = CMTimeMake(150, 600);
// CMTime = Value and Timescale.
// Timescale = the number of ticks per second you want
// Value is the number of ticks
// For us - each frame we add will be 1/4th of a second
// Apple recommends 600 ticks per second for video because it is a
// multiple of the standard video rates 24, 30, 60 fps etc.
CMTime lastTime=CMTimeMake(i*150, 600);
CMTime presentTime=CMTimeAdd(lastTime, frameTime);
if (i == 0) {presentTime = CMTimeMake(0, 600);}
// This ensures the first frame starts at 0.
if (i >= [array count])
{
buffer = NULL;
}
else
{
// This command grabs the next UIImage and converts it to a CGImage
buffer = [self pixelBufferFromCGImage:[[array objectAtIndex:i] CGImage]];
}
if (buffer)
{
// Give the CGImage to the AVAssetWriter to add to your video
[adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
i++;
}
else
{
(4) Finish the session (NOTE: the method continues from above)
//Finish the session:
// This is important to be done exactly in this order
[writerInput markAsFinished];
// WARNING: finishWriting in the solution above is deprecated.
// You now need to give a completion handler.
[videoWriter finishWritingWithCompletionHandler:^{
NSLog(@"Finished writing...checking completion status...");
if (videoWriter.status != AVAssetWriterStatusFailed && videoWriter.status == AVAssetWriterStatusCompleted)
{
NSLog(@"Video writing succeeded.");
// Move video to camera roll
// NOTE: You cannot write directly to the camera roll.
// You must first write to an iOS directory then move it!
NSURL *videoTempURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@", path]];
[self saveToCameraRoll:videoTempURL];
} else
{
NSLog(@"Video writing failed: %@", videoWriter.error);
}
}]; // end videoWriter finishWriting Block
CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
NSLog (@"Done");
break;
}
}
}
}
(5) Convert your UIImages to a CVPixelBufferRef
This method will give you a CV pixel buffer reference, which is what the asset writer needs. It is obtained from a CGImageRef, which you get from your UIImage (above).
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
// This again was just our utility class for the height & width of the
// incoming video (640 height x 480 width)
ATHSingleton *singleton = [ATHSingleton singletons];
int height = singleton.screenHeight;
int width = singleton.screenWidth;
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width,
height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, width,
height, 8, 4*width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
(6) Move your video to the camera roll. Since AVAssetWriter cannot write directly to the camera roll, this moves the video from "tmp/temp.mov" (or whatever filename you chose above) to the camera roll.
- (void) saveToCameraRoll:(NSURL *)srcURL
{
NSLog(@"srcURL: %@", srcURL);
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
ALAssetsLibraryWriteVideoCompletionBlock videoWriteCompletionBlock =
^(NSURL *newURL, NSError *error) {
if (error) {
NSLog( @"Error writing image with metadata to Photo Library: %@", error );
} else {
NSLog( @"Wrote image with metadata to Photo Library %@", newURL.absoluteString);
}
};
if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:srcURL])
{
[library writeVideoAtPathToSavedPhotosAlbum:srcURL
completionBlock:videoWriteCompletionBlock];
}
}
Zoul's answer above gives a nice outline of what you will be doing. We commented this code extensively so you can then see how it was done using working code.
NOTE: This is a Swift 2.1 solution (iOS 8+, Xcode 7.2).
Last week I set out to write the iOS code to generate a video from images. I had a little AVFoundation experience, but had never even heard of a CVPixelBuffer. I came across the answers on this page and also here. It took several days to dissect everything and put it all back together in Swift in a way that made sense to my brain. Below is what I came up with.
NOTE: If you copy/paste all the code below into a single Swift file, it should compile. You will just need to tweak the loadImages() and RenderSettings values.
Part 1: Setting things up
Here I group all the export-related settings into a single RenderSettings struct.
import AVFoundation
import UIKit
import Photos
struct RenderSettings {
var width: CGFloat = 1280
var height: CGFloat = 720
var fps: Int32 = 2 // 2 frames per second
var avCodecKey = AVVideoCodecH264
var videoFilename = "render"
var videoFilenameExt = "mp4"
var size: CGSize {
return CGSize(width: width, height: height)
}
var outputURL: NSURL {
// Use the CachesDirectory so the rendered video file sticks around as long as we need it to.
// Using the CachesDirectory ensures the file won't be included in a backup of the app.
let fileManager = NSFileManager.defaultManager()
if let tmpDirURL = try? fileManager.URLForDirectory(.CachesDirectory, inDomain: .UserDomainMask, appropriateForURL: nil, create: true) {
return tmpDirURL.URLByAppendingPathComponent(videoFilename).URLByAppendingPathExtension(videoFilenameExt)
}
fatalError("URLForDirectory() failed")
}
}
Part 2: The ImageAnimator
The ImageAnimator class knows about your images and uses the VideoWriter class to perform the rendering. The idea is to keep the video content code separate from the low-level AVFoundation code. I also added saveToLibrary() here as a class function, which gets called at the end of the chain to save the video to the photo library.
class ImageAnimator {
// Apple suggests a timescale of 600 because it's a multiple of standard video rates 24, 25, 30, 60 fps etc.
static let kTimescale: Int32 = 600
let settings: RenderSettings
let videoWriter: VideoWriter
var images: [UIImage]!
var frameNum = 0
class func saveToLibrary(videoURL: NSURL) {
PHPhotoLibrary.requestAuthorization { status in
guard status == .Authorized else { return }
PHPhotoLibrary.sharedPhotoLibrary().performChanges({
PHAssetChangeRequest.creationRequestForAssetFromVideoAtFileURL(videoURL)
}) { success, error in
if !success {
print("Could not save video to photo library:", error)
}
}
}
}
class func removeFileAtURL(fileURL: NSURL) {
do {
try NSFileManager.defaultManager().removeItemAtPath(fileURL.path!)
}
catch _ as NSError {
// Assume file doesn't exist.
}
}
init(renderSettings: RenderSettings) {
settings = renderSettings
videoWriter = VideoWriter(renderSettings: settings)
images = loadImages()
}
func render(completion: ()->Void) {
// The VideoWriter will fail if a file exists at the URL, so clear it out first.
ImageAnimator.removeFileAtURL(settings.outputURL)
videoWriter.start()
videoWriter.render(appendPixelBuffers) {
ImageAnimator.saveToLibrary(self.settings.outputURL)
completion()
}
}
// Replace this logic with your own.
func loadImages() -> [UIImage] {
var images = [UIImage]()
for index in 1...10 {
let filename = "\(index).jpg"
images.append(UIImage(named: filename)!)
}
return images
}
// This is the callback function for VideoWriter.render()
func appendPixelBuffers(writer: VideoWriter) -> Bool {
let frameDuration = CMTimeMake(Int64(ImageAnimator.kTimescale / settings.fps), ImageAnimator.kTimescale)
while !images.isEmpty {
if writer.isReadyForData == false {
// Inform writer we have more buffers to write.
return false
}
let image = images.removeFirst()
let presentationTime = CMTimeMultiply(frameDuration, Int32(frameNum))
let success = videoWriter.addImage(image, withPresentationTime: presentationTime)
if success == false {
fatalError("addImage() failed")
}
frameNum++
}
// Inform writer all buffers have been written.
return true
}
}
Part 3: The VideoWriter
The VideoWriter class does all the AVFoundation heavy lifting. It is mostly a wrapper around AVAssetWriter and AVAssetWriterInput. It also contains fancy code written by not me that knows how to translate an image into a CVPixelBuffer.
class VideoWriter {
let renderSettings: RenderSettings
var videoWriter: AVAssetWriter!
var videoWriterInput: AVAssetWriterInput!
var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!
var isReadyForData: Bool {
return videoWriterInput?.readyForMoreMediaData ?? false
}
class func pixelBufferFromImage(image: UIImage, pixelBufferPool: CVPixelBufferPool, size: CGSize) -> CVPixelBuffer {
var pixelBufferOut: CVPixelBuffer?
let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
if status != kCVReturnSuccess {
fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
}
let pixelBuffer = pixelBufferOut!
CVPixelBufferLockBaseAddress(pixelBuffer, 0)
let data = CVPixelBufferGetBaseAddress(pixelBuffer)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let context = CGBitmapContextCreate(data, Int(size.width), Int(size.height),
8, CVPixelBufferGetBytesPerRow(pixelBuffer), rgbColorSpace, CGImageAlphaInfo.PremultipliedFirst.rawValue)
CGContextClearRect(context, CGRectMake(0, 0, size.width, size.height))
let horizontalRatio = size.width / image.size.width
let verticalRatio = size.height / image.size.height
//aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit
let newSize = CGSize(width: image.size.width * aspectRatio, height: image.size.height * aspectRatio)
let x = newSize.width < size.width ? (size.width - newSize.width) / 2 : 0
let y = newSize.height < size.height ? (size.height - newSize.height) / 2 : 0
CGContextDrawImage(context, CGRectMake(x, y, newSize.width, newSize.height), image.CGImage)
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0)
return pixelBuffer
}
init(renderSettings: RenderSettings) {
self.renderSettings = renderSettings
}
func start() {
let avOutputSettings: [String: AnyObject] = [
AVVideoCodecKey: renderSettings.avCodecKey,
AVVideoWidthKey: NSNumber(float: Float(renderSettings.width)),
AVVideoHeightKey: NSNumber(float: Float(renderSettings.height))
]
func createPixelBufferAdaptor() {
let sourcePixelBufferAttributesDictionary = [
kCVPixelBufferPixelFormatTypeKey as String: NSNumber(unsignedInt: kCVPixelFormatType_32ARGB),
kCVPixelBufferWidthKey as String: NSNumber(float: Float(renderSettings.width)),
kCVPixelBufferHeightKey as String: NSNumber(float: Float(renderSettings.height))
]
pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput,
sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
}
func createAssetWriter(outputURL: NSURL) -> AVAssetWriter {
guard let assetWriter = try? AVAssetWriter(URL: outputURL, fileType: AVFileTypeMPEG4) else {
fatalError("AVAssetWriter() failed")
}
guard assetWriter.canApplyOutputSettings(avOutputSettings, forMediaType: AVMediaTypeVideo) else {
fatalError("canApplyOutputSettings() failed")
}
return assetWriter
}
videoWriter = createAssetWriter(renderSettings.outputURL)
videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: avOutputSettings)
if videoWriter.canAddInput(videoWriterInput) {
videoWriter.addInput(videoWriterInput)
}
else {
fatalError("canAddInput() returned false")
}
// The pixel buffer adaptor must be created before we start writing.
createPixelBufferAdaptor()
if videoWriter.startWriting() == false {
fatalError("startWriting() failed")
}
videoWriter.startSessionAtSourceTime(kCMTimeZero)
precondition(pixelBufferAdaptor.pixelBufferPool != nil, "nil pixelBufferPool")
}
func render(appendPixelBuffers: (VideoWriter)->Bool, completion: ()->Void) {
precondition(videoWriter != nil, "Call start() to initialze the writer")
let queue = dispatch_queue_create("mediaInputQueue", nil)
videoWriterInput.requestMediaDataWhenReadyOnQueue(queue) {
let isFinished = appendPixelBuffers(self)
if isFinished {
self.videoWriterInput.markAsFinished()
self.videoWriter.finishWritingWithCompletionHandler() {
dispatch_async(dispatch_get_main_queue()) {
completion()
}
}
}
else {
// Fall through. The closure will be called again when the writer is ready.
}
}
}
func addImage(image: UIImage, withPresentationTime presentationTime: CMTime) -> Bool {
precondition(pixelBufferAdaptor != nil, "Call start() to initialze the writer")
let pixelBuffer = VideoWriter.pixelBufferFromImage(image, pixelBufferPool: pixelBufferAdaptor.pixelBufferPool!, size: renderSettings.size)
return pixelBufferAdaptor.appendPixelBuffer(pixelBuffer, withPresentationTime: presentationTime)
}
}
Part 4: Make it happen
Once everything is in place, these are your 3 magic lines:
let settings = RenderSettings()
let imageAnimator = ImageAnimator(renderSettings: settings)
imageAnimator.render() {
print("yes")
}
I took Zoul's main ideas, incorporated the AVAssetWriterInputPixelBufferAdaptor method, and made the beginnings of a little framework out of it.
Feel free to check it out and improve upon it! CEMovieMaker
Here is a Swift 2.x version, tested on iOS 8. It combines answers from @Scott Raposa and @Praxiteles along with code from @acj contributed for another question. The code from @acj is here: https://gist.github.com/acj/6ae90aa1ebb8cad6b47b . @TimBull also provided code.
Like @Scott Raposa, I had never even heard of CVPixelBufferPoolCreatePixelBuffer and several other functions, let alone understood how to use them.
What you see below was cobbled together mostly through trial and error and from reading Apple docs. Please use with caution, and provide suggestions if there are mistakes.
Usage:
import UIKit
import AVFoundation
import Photos
writeImagesAsMovie(yourImages, videoPath: yourPath, videoSize: yourSize, videoFPS: 30)
Code:
func writeImagesAsMovie(allImages: [UIImage], videoPath: String, videoSize: CGSize, videoFPS: Int32) {
// Create AVAssetWriter to write video
guard let assetWriter = createAssetWriter(videoPath, size: videoSize) else {
print("Error converting images to video: AVAssetWriter not created")
return
}
// If here, AVAssetWriter exists so create AVAssetWriterInputPixelBufferAdaptor
let writerInput = assetWriter.inputs.filter{ $0.mediaType == AVMediaTypeVideo }.first!
let sourceBufferAttributes : [String : AnyObject] = [
kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32ARGB),
kCVPixelBufferWidthKey as String : videoSize.width,
kCVPixelBufferHeightKey as String : videoSize.height,
]
let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: sourceBufferAttributes)
// Start writing session
assetWriter.startWriting()
assetWriter.startSessionAtSourceTime(kCMTimeZero)
if (pixelBufferAdaptor.pixelBufferPool == nil) {
print("Error converting images to video: pixelBufferPool nil after starting session")
return
}
// -- Create queue for <requestMediaDataWhenReadyOnQueue>
let mediaQueue = dispatch_queue_create("mediaInputQueue", nil)
// -- Set video parameters
let frameDuration = CMTimeMake(1, videoFPS)
var frameCount = 0
// -- Add images to video
let numImages = allImages.count
writerInput.requestMediaDataWhenReadyOnQueue(mediaQueue, usingBlock: { () -> Void in
// Append unadded images to video but only while input ready
while (writerInput.readyForMoreMediaData && frameCount < numImages) {
let lastFrameTime = CMTimeMake(Int64(frameCount), videoFPS)
let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)
if !self.appendPixelBufferForImageAtURL(allImages[frameCount], pixelBufferAdaptor: pixelBufferAdaptor, presentationTime: presentationTime) {
print("Error converting images to video: AVAssetWriterInputPixelBufferAdapter failed to append pixel buffer")
return
}
frameCount += 1
}
// No more images to add? End video.
if (frameCount >= numImages) {
writerInput.markAsFinished()
assetWriter.finishWritingWithCompletionHandler {
if (assetWriter.error != nil) {
print("Error converting images to video: \(assetWriter.error)")
} else {
self.saveVideoToLibrary(NSURL(fileURLWithPath: videoPath))
print("Converted images to movie @ \(videoPath)")
}
}
}
})
}
func createAssetWriter(path: String, size: CGSize) -> AVAssetWriter? {
// Convert <path> to NSURL object
let pathURL = NSURL(fileURLWithPath: path)
// Return new asset writer or nil
do {
// Create asset writer
let newWriter = try AVAssetWriter(URL: pathURL, fileType: AVFileTypeMPEG4)
// Define settings for video input
let videoSettings: [String : AnyObject] = [
AVVideoCodecKey : AVVideoCodecH264,
AVVideoWidthKey : size.width,
AVVideoHeightKey : size.height,
]
// Add video input to writer
let assetWriterVideoInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
newWriter.addInput(assetWriterVideoInput)
// Return writer
print("Created asset writer for \(size.width)x\(size.height) video")
return newWriter
} catch {
print("Error creating asset writer: \(error)")
return nil
}
}
func appendPixelBufferForImageAtURL(image: UIImage, pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor, presentationTime: CMTime) -> Bool {
var appendSucceeded = false
autoreleasepool {
if let pixelBufferPool = pixelBufferAdaptor.pixelBufferPool {
let pixelBufferPointer = UnsafeMutablePointer<CVPixelBuffer?>.alloc(1)
let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(
kCFAllocatorDefault,
pixelBufferPool,
pixelBufferPointer
)
if let pixelBuffer = pixelBufferPointer.memory where status == 0 {
fillPixelBufferFromImage(image, pixelBuffer: pixelBuffer)
appendSucceeded = pixelBufferAdaptor.appendPixelBuffer(pixelBuffer, withPresentationTime: presentationTime)
pixelBufferPointer.destroy()
} else {
NSLog("Error: Failed to allocate pixel buffer from pool")
}
pixelBufferPointer.dealloc(1)
}
}
return appendSucceeded
}
func fillPixelBufferFromImage(image: UIImage, pixelBuffer: CVPixelBufferRef) {
CVPixelBufferLockBaseAddress(pixelBuffer, 0)
let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
// Create CGBitmapContext
let context = CGBitmapContextCreate(
pixelData,
Int(image.size.width),
Int(image.size.height),
8,
CVPixelBufferGetBytesPerRow(pixelBuffer),
rgbColorSpace,
CGImageAlphaInfo.PremultipliedFirst.rawValue
)
// Draw image into context
CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage)
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0)
}
func saveVideoToLibrary(videoURL: NSURL) {
PHPhotoLibrary.requestAuthorization { status in
// Return if unauthorized
guard status == .Authorized else {
print("Error saving video: unauthorized access")
return
}
// If here, save video to library
PHPhotoLibrary.sharedPhotoLibrary().performChanges({
PHAssetChangeRequest.creationRequestForAssetFromVideoAtFileURL(videoURL)
}) { success, error in
if !success {
print("Error saving video: \(error)")
}
}
}
}
Here is the Swift 3 version: how to convert an image array into a video.
import Foundation
import AVFoundation
import UIKit
typealias CXEMovieMakerCompletion = (URL) -> Void
typealias CXEMovieMakerUIImageExtractor = (AnyObject) -> UIImage?
public class ImagesToVideoUtils: NSObject {
static let paths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
static let tempPath = paths[0] + "/exprotvideo.mp4"
static let fileURL = URL(fileURLWithPath: tempPath)
// static let tempPath = NSTemporaryDirectory() + "/exprotvideo.mp4"
// static let fileURL = URL(fileURLWithPath: tempPath)
var assetWriter:AVAssetWriter!
var writeInput:AVAssetWriterInput!
var bufferAdapter:AVAssetWriterInputPixelBufferAdaptor!
var videoSettings:[String : Any]!
var frameTime:CMTime!
//var fileURL:URL!
var completionBlock: CXEMovieMakerCompletion?
var movieMakerUIImageExtractor:CXEMovieMakerUIImageExtractor?
public class func videoSettings(codec:String, width:Int, height:Int) -> [String: Any]{
if(Int(width) % 16 != 0){
print("warning: video settings width must be divisible by 16")
}
let videoSettings:[String: Any] = [AVVideoCodecKey: AVVideoCodecJPEG, //AVVideoCodecH264,
AVVideoWidthKey: width,
AVVideoHeightKey: height]
return videoSettings
}
public init(videoSettings: [String: Any]) {
super.init()
if(FileManager.default.fileExists(atPath: ImagesToVideoUtils.tempPath)){
guard (try? FileManager.default.removeItem(atPath: ImagesToVideoUtils.tempPath)) != nil else {
print("remove path failed")
return
}
}
self.assetWriter = try! AVAssetWriter(url: ImagesToVideoUtils.fileURL, fileType: AVFileTypeQuickTimeMovie)
self.videoSettings = videoSettings
self.writeInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
assert(self.assetWriter.canAdd(self.writeInput), "add failed")
self.assetWriter.add(self.writeInput)
let bufferAttributes:[String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32ARGB)]
self.bufferAdapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: self.writeInput, sourcePixelBufferAttributes: bufferAttributes)
self.frameTime = CMTimeMake(1, 5)
}
func createMovieFrom(urls: [URL], withCompletion: @escaping CXEMovieMakerCompletion){
self.createMovieFromSource(images: urls as [AnyObject], extractor:{(inputObject:AnyObject) ->UIImage? in
return UIImage(data: try! Data(contentsOf: inputObject as! URL))}, withCompletion: withCompletion)
}
func createMovieFrom(images: [UIImage], withCompletion: @escaping CXEMovieMakerCompletion){
self.createMovieFromSource(images: images, extractor: {(inputObject:AnyObject) -> UIImage? in
return inputObject as? UIImage}, withCompletion: withCompletion)
}
func createMovieFromSource(images: [AnyObject], extractor: @escaping CXEMovieMakerUIImageExtractor, withCompletion: @escaping CXEMovieMakerCompletion){
self.completionBlock = withCompletion
self.assetWriter.startWriting()
self.assetWriter.startSession(atSourceTime: kCMTimeZero)
let mediaInputQueue = DispatchQueue(label: "mediaInputQueue")
var i = 0
let frameNumber = images.count
self.writeInput.requestMediaDataWhenReady(on: mediaInputQueue){
while(true){
if(i >= frameNumber){
break
}
if (self.writeInput.isReadyForMoreMediaData){
var sampleBuffer:CVPixelBuffer?
autoreleasepool{
// Only build a pixel buffer when the frame could actually be
// extracted; otherwise skip it so the loop still terminates.
if let img = extractor(images[i]) {
sampleBuffer = self.newPixelBufferFrom(cgImage: img.cgImage!)
} else {
i += 1
print("Warning: could not extract one of the frames")
}
}
if (sampleBuffer != nil){
if(i == 0){
self.bufferAdapter.append(sampleBuffer!, withPresentationTime: kCMTimeZero)
}else{
let value = i - 1
let lastTime = CMTimeMake(Int64(value), self.frameTime.timescale)
let presentTime = CMTimeAdd(lastTime, self.frameTime)
self.bufferAdapter.append(sampleBuffer!, withPresentationTime: presentTime)
}
i = i + 1
}
}
}
self.writeInput.markAsFinished()
self.assetWriter.finishWriting {
DispatchQueue.main.sync {
self.completionBlock!(ImagesToVideoUtils.fileURL)
}
}
}
}
func newPixelBufferFrom(cgImage:CGImage) -> CVPixelBuffer?{
let options:[String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true, kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
var pxbuffer:CVPixelBuffer?
let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int
let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")
CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let context = CGContext(data: pxdata, width: frameWidth, height: frameHeight, bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
assert(context != nil, "context is nil")
context!.concatenate(CGAffineTransform.identity)
context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
return pxbuffer
}
}
I use it with screen capturing, to create a video of the screen captures; here is the full story/complete example.
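For reference, a minimal usage sketch under assumed values (screenshots stands in for your own [UIImage] array; 640x480 keeps the width divisible by 16):

let settings = ImagesToVideoUtils.videoSettings(codec: AVVideoCodecJPEG, width: 640, height: 480)
let movieMaker = ImagesToVideoUtils(videoSettings: settings)
movieMaker.createMovieFrom(images: screenshots) { fileURL in
    // Called on the main queue once the file has been written.
    print("Movie written to \(fileURL)")
}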
Just translated @Scott Raposa's answer to Swift 3 (with a few very small changes):
import AVFoundation
import UIKit
import Photos
struct RenderSettings {
var size : CGSize = .zero
var fps: Int32 = 6 // frames per second
var avCodecKey = AVVideoCodecH264
var videoFilename = "render"
var videoFilenameExt = "mp4"
var outputURL: URL {
// Use the CachesDirectory so the rendered video file sticks around as long as we need it to.
// Using the CachesDirectory ensures the file won't be included in a backup of the app.
let fileManager = FileManager.default
if let tmpDirURL = try? fileManager.url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: true) {
return tmpDirURL.appendingPathComponent(videoFilename).appendingPathExtension(videoFilenameExt)
}
fatalError("URLForDirectory() failed")
}
}
class ImageAnimator {
// Apple suggests a timescale of 600 because it's a multiple of standard video rates 24, 25, 30, 60 fps etc.
static let kTimescale: Int32 = 600
let settings: RenderSettings
let videoWriter: VideoWriter
var images: [UIImage]!
var frameNum = 0
class func saveToLibrary(videoURL: URL) {
PHPhotoLibrary.requestAuthorization { status in
guard status == .authorized else { return }
PHPhotoLibrary.shared().performChanges({
PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: videoURL)
}) { success, error in
if !success {
print("Could not save video to photo library:", error)
}
}
}
}
class func removeFileAtURL(fileURL: URL) {
do {
try FileManager.default.removeItem(atPath: fileURL.path)
}
catch _ as NSError {
// Assume file doesn't exist.
}
}
init(renderSettings: RenderSettings) {
settings = renderSettings
videoWriter = VideoWriter(renderSettings: settings)
// images = loadImages()
}
func render(completion: (()->Void)?) {
// The VideoWriter will fail if a file exists at the URL, so clear it out first.
ImageAnimator.removeFileAtURL(fileURL: settings.outputURL)
videoWriter.start()
videoWriter.render(appendPixelBuffers: appendPixelBuffers) {
ImageAnimator.saveToLibrary(videoURL: self.settings.outputURL)
completion?()
}
}
// // Replace this logic with your own.
// func loadImages() -> [UIImage] {
// var images = [UIImage]()
// for index in 1...10 {
// let filename = "\(index).jpg"
// images.append(UIImage(named: filename)!)
// }
// return images
// }
// This is the callback function for VideoWriter.render()
func appendPixelBuffers(writer: VideoWriter) -> Bool {
let frameDuration = CMTimeMake(Int64(ImageAnimator.kTimescale / settings.fps), ImageAnimator.kTimescale)
while !images.isEmpty {
if writer.isReadyForData == false {
// Inform writer we have more buffers to write.
return false
}
let image = images.removeFirst()
let presentationTime = CMTimeMultiply(frameDuration, Int32(frameNum))
let success = videoWriter.addImage(image: image, withPresentationTime: presentationTime)
if success == false {
fatalError("addImage() failed")
}
frameNum += 1
}
// Inform writer all buffers have been written.
return true
}
}
class VideoWriter {
let renderSettings: RenderSettings
var videoWriter: AVAssetWriter!
var videoWriterInput: AVAssetWriterInput!
var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!
var isReadyForData: Bool {
return videoWriterInput?.isReadyForMoreMediaData ?? false
}
class func pixelBufferFromImage(image: UIImage, pixelBufferPool: CVPixelBufferPool, size: CGSize) -> CVPixelBuffer {
var pixelBufferOut: CVPixelBuffer?
let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
if status != kCVReturnSuccess {
fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
}
let pixelBuffer = pixelBufferOut!
CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
let data = CVPixelBufferGetBaseAddress(pixelBuffer)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let context = CGContext(data: data, width: Int(size.width), height: Int(size.height),
bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
context!.clear(CGRect(x:0,y: 0,width: size.width,height: size.height))
let horizontalRatio = size.width / image.size.width
let verticalRatio = size.height / image.size.height
//aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit
let newSize = CGSize(width: image.size.width * aspectRatio, height: image.size.height * aspectRatio)
let x = newSize.width < size.width ? (size.width - newSize.width) / 2 : 0
let y = newSize.height < size.height ? (size.height - newSize.height) / 2 : 0
context?.draw(image.cgImage!, in: CGRect(x:x,y: y, width: newSize.width, height: newSize.height))
CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
return pixelBuffer
}
init(renderSettings: RenderSettings) {
self.renderSettings = renderSettings
}
func start() {
let avOutputSettings: [String: Any] = [
AVVideoCodecKey: renderSettings.avCodecKey,
AVVideoWidthKey: NSNumber(value: Float(renderSettings.size.width)),
AVVideoHeightKey: NSNumber(value: Float(renderSettings.size.height))
]
func createPixelBufferAdaptor() {
let sourcePixelBufferAttributesDictionary = [
kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32ARGB),
kCVPixelBufferWidthKey as String: NSNumber(value: Float(renderSettings.size.width)),
kCVPixelBufferHeightKey as String: NSNumber(value: Float(renderSettings.size.height))
]
pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput,
sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
}
func createAssetWriter(outputURL: URL) -> AVAssetWriter {
guard let assetWriter = try? AVAssetWriter(outputURL: outputURL, fileType: AVFileTypeMPEG4) else {
fatalError("AVAssetWriter() failed")
}
guard assetWriter.canApply(outputSettings: avOutputSettings, forMediaType: AVMediaTypeVideo) else {
fatalError("canApplyOutputSettings() failed")
}
return assetWriter
}
videoWriter = createAssetWriter(outputURL: renderSettings.outputURL)
videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: avOutputSettings)
if videoWriter.canAdd(videoWriterInput) {
videoWriter.add(videoWriterInput)
}
else {
fatalError("canAddInput() returned false")
}
// The pixel buffer adaptor must be created before we start writing.
createPixelBufferAdaptor()
if videoWriter.startWriting() == false {
fatalError("startWriting() failed")
}
videoWriter.startSession(atSourceTime: kCMTimeZero)
precondition(pixelBufferAdaptor.pixelBufferPool != nil, "nil pixelBufferPool")
}
func render(appendPixelBuffers: ((VideoWriter)->Bool)?, completion: (()->Void)?) {
precondition(videoWriter != nil, "Call start() to initialze the writer")
let queue = DispatchQueue(label: "mediaInputQueue")
videoWriterInput.requestMediaDataWhenReady(on: queue) {
let isFinished = appendPixelBuffers?(self) ?? false
if isFinished {
self.videoWriterInput.markAsFinished()
self.videoWriter.finishWriting() {
DispatchQueue.main.async {
completion?()
}
}
}
else {
// Fall through. The closure will be called again when the writer is ready.
}
}
}
func addImage(image: UIImage, withPresentationTime presentationTime: CMTime) -> Bool {
precondition(pixelBufferAdaptor != nil, "Call start() to initialze the writer")
let pixelBuffer = VideoWriter.pixelBufferFromImage(image: image, pixelBufferPool: pixelBufferAdaptor.pixelBufferPool!, size: renderSettings.size)
return pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
}
}
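Since loadImages() is commented out in this translation, you assign the images yourself before rendering. A small sketch under assumed values (myImages is a placeholder for your own [UIImage] array):

var settings = RenderSettings()
settings.size = CGSize(width: 640, height: 480)
let imageAnimator = ImageAnimator(renderSettings: settings)
imageAnimator.images = myImages
imageAnimator.render() {
    print("done")
}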
Here is a Swift 4 version for macOS (not iOS), based on @Mikita Manko's answer.
import AVFoundation
import AppKit
class VidWriter {
var assetWriter: AVAssetWriter
var writerInput: AVAssetWriterInput
var bufferAdapter: AVAssetWriterInputPixelBufferAdaptor!
var videoSettings: [String : Any]
var frameTime: CMTime!
var fileUrl: URL!
init(url: URL, vidSettings: [String : Any]) {
self.assetWriter = try! AVAssetWriter(url: url, fileType: AVFileType.mov)
self.fileUrl = url
self.videoSettings = vidSettings
self.writerInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: self.videoSettings)
assert(self.assetWriter.canAdd(self.writerInput), "Writer cannot add input")
self.assetWriter.add(self.writerInput)
let bufferAttributes = [kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32ARGB)]
self.bufferAdapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: self.writerInput, sourcePixelBufferAttributes: bufferAttributes)
self.frameTime = CMTimeMake(1, 5) // Default value, use 'applyTimeWith(duration:)' to apply specific time.
}
static func videoSettings(codec: String = AVVideoCodecJPEG, width: Int, height: Int) -> [String : Any] {
// AVVideoCodecJPEG also works, but results in a much bigger file.
return [
AVVideoCodecKey : AVVideoCodecH264, //AVVideoCodecJPEG,
AVVideoWidthKey : width,
AVVideoHeightKey : height
]
}
/**
Update the movie time with the number of images and the duration per image.
- Parameter duration: The duration per frame (image)
- Parameter frameNumber: The number of frames (images)
*/
func applyTimeWith(duration: Float, frameNumber: Int) {
let scale = Float(frameNumber) / (Float(frameNumber) * duration)
self.frameTime = CMTimeMake(1, Int32(scale))
}
func createMovieFrom(images: [NSImage], completion: @escaping (URL) -> Void) {
self.assetWriter.startWriting()
self.assetWriter.startSession(atSourceTime: kCMTimeZero)
let mediaInputQueue = DispatchQueue(label: "MediaInputQueu")
var i = 0
let frameNumber = images.count
self.writerInput.requestMediaDataWhenReady(on: mediaInputQueue) {
while i < frameNumber {
if self.writerInput.isReadyForMoreMediaData {
var sampleBuffer: CVPixelBuffer?
autoreleasepool(invoking: {
let img = images[i]
var imgRect = CGRect(x: 0, y: 0, width: img.size.width, height: img.size.height)
sampleBuffer = self.newPixelBufferFrom(cgImage: img.cgImage(forProposedRect: &imgRect, context: nil, hints: nil)!)
}) // End of autoreleasepool
if sampleBuffer != nil {
if i == 0 {
self.bufferAdapter.append(sampleBuffer!, withPresentationTime: kCMTimeZero)
}
else {
let value = i - 1
let lastTime = CMTimeMake(Int64(value), self.frameTime.timescale)
let presentTime = CMTimeAdd(lastTime, self.frameTime)
self.bufferAdapter.append(sampleBuffer!, withPresentationTime: presentTime)
}
i += 1
}
} // End of isReadyForMoreMediaData
} // End of while loop
self.writerInput.markAsFinished()
self.assetWriter.finishWriting {
DispatchQueue.main.async {
// At this point, the given URL will already have the ready file.
// You can just use the URL passed in the init.
completion(self.fileUrl)
}
}
}
}
func newPixelBufferFrom(cgImage: CGImage) -> CVPixelBuffer? {
let options: [String : Any] = [kCVPixelBufferCGImageCompatibilityKey as String : true, kCVPixelBufferCGBitmapContextCompatibilityKey as String : true]
var pxbuffer: CVPixelBuffer?
let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int
let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")
CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
let pxData = CVPixelBufferGetBaseAddress(pxbuffer!)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let context = CGContext(data: pxData, width: frameWidth, height: frameHeight, bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
assert(context != nil, "context is nil")
context!.concatenate(CGAffineTransform.identity)
context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
return pxbuffer
}
}
Usage:
let settings = VidWriter.videoSettings(width: cgImg.width, height: cgImg.height)
// Note: There should be no file at the targetUrl or nothing will be written.
self.vidWriter = VidWriter(url: targetUrl!, vidSettings: settings)
self.vidWriter.applyTimeWith(duration: durationPerFrame, frameNumber: images.count)
self.vidWriter.createMovieFrom(images: images, completion: { (finalUrl) in
print("Completed")
})
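Since nothing will be written if a file already exists at targetUrl, it can help to clear out any leftover file first; a small sketch using the same targetUrl as above:

// AVAssetWriter will not overwrite an existing file, so remove any leftover first.
if FileManager.default.fileExists(atPath: targetUrl!.path) {
    try? FileManager.default.removeItem(at: targetUrl!)
}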
Well, this is kind of hard to implement in pure Objective-C... If you are developing for jailbroken devices, a good idea is to use the command-line tool ffmpeg from inside your app. It is quite easy to create a movie from images with a command like:
ffmpeg -r 10 -b 1800 -i %03d.jpg test1800.mp4
Note that the images have to be named sequentially, and also be placed in the same directory. For more information, take a look at: http://electron.mit.edu/~gsteele/ffmpeg/
Use AVAssetWriter to write images as a movie. I have already answered this here: https://stackoverflow.com/a/19166876/1582217