I have a serious problem: I have an NSArray with several UIImage objects. What I want to do now is create a movie from those UIImages. But I don't have any idea how to do so.
I hope someone can help me or send me a code snippet which does something like what I want.
Edit, for future reference: if the video looks distorted after applying the solution, make sure the width of the images/area you are capturing is a multiple of 16. Found after many hours of struggle here:
Why does my movie made from UIImages get distorted?
Here is the complete solution (just make sure the width is a multiple of 16):
http://codethink.no-ip.org/wordpress/archives/67
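(For example, and just as a sketch rather than anything from the linked post, you could round the capture width down to the nearest multiple of 16 before rendering:)
import CoreGraphics
// Round a capture size down so its width is a multiple of 16,
// which avoids the distortion described above.
func sizeWithWidthAlignedTo16(_ size: CGSize) -> CGSize {
    let alignedWidth = floor(size.width / 16) * 16
    return CGSize(width: alignedWidth, height: size.height)
}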
Take a look at AVAssetWriter and the rest of the AVFoundation framework. The writer has an input of type AVAssetWriterInput, which in turn has a method called appendSampleBuffer: that lets you add individual frames to a video stream. Essentially you will have to:
1) Wire the writer:
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:somePath] fileType:AVFileTypeQuickTimeMovie
error:&error];
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:640], AVVideoWidthKey,
[NSNumber numberWithInt:480], AVVideoHeightKey,
nil];
AVAssetWriterInput* writerInput = [[AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings] retain]; //retain should be removed if ARC
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
2) Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:…] //use kCMTimeZero if unsure
3) Write some samples:
// Or you can use AVAssetWriterInputPixelBufferAdaptor.
// That lets you feed the writer input data from a CVPixelBuffer
// that’s quite easy to create from a CGImage.
[writerInput appendSampleBuffer:sampleBuffer];
4) Finish the session:
[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:…]; //optional can call finishWriting without specifying endTime
[videoWriter finishWriting]; //deprecated in ios6
/*
[videoWriter finishWritingWithCompletionHandler:...]; //ios 6.0+
*/
You will still have to fill in a lot of blanks, but I think the only remaining really hard part is getting a pixel buffer from a CGImage:
- (CVPixelBufferRef) newPixelBufferFromCGImage: (CGImageRef) image
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
frameSize.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
frameSize.height, 8, 4*frameSize.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, frameTransform);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
frameSize is a CGSize describing your target frame size, and frameTransform is a CGAffineTransform that lets you transform the images when you draw them into frames.
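Neither of those is declared in the snippet above; they are simply two values you supply yourself. For example (a small sketch in Swift rather than Objective-C, matching the 640x480 settings used earlier):
import CoreGraphics
let frameSize = CGSize(width: 640, height: 480)   // must match AVVideoWidthKey / AVVideoHeightKey
let frameTransform = CGAffineTransform.identity   // or a rotation/scale if your source images need one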
Here is the latest working code on iOS 8, written in Objective-C.
We had to make a variety of tweaks to @Zoul's answer above to get this to work on the latest versions of Xcode and iOS 8. Here is our complete working code that takes an array of UIImages, makes them into a .mov file, saves it to a temp directory, then moves it to the camera roll. We assembled code from multiple different posts to get this working. In the comments we have highlighted the traps we had to solve to get the code working.
(1) Create your collection of UIImages
[self saveMovieToLibrary]
- (IBAction)saveMovieToLibrary
{
// You just need the height and width of the video here
// For us, our input and output video was 640 height x 480 width
// which is what we get from the iOS front camera
ATHSingleton *singleton = [ATHSingleton singletons];
int height = singleton.screenHeight;
int width = singleton.screenWidth;
// You can save a .mov or a .mp4 file
//NSString *fileNameOut = @"temp.mp4";
NSString *fileNameOut = @"temp.mov";
// We chose to save in the tmp/ directory on the device initially
NSString *directoryOut = @"tmp/";
NSString *outFile = [NSString stringWithFormat:@"%@%@",directoryOut,fileNameOut];
NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:outFile]];
NSURL *videoTempURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), fileNameOut]];
// WARNING: AVAssetWriter does not overwrite files for us, so remove the destination file if it already exists
NSFileManager *fileManager = [NSFileManager defaultManager];
[fileManager removeItemAtPath:[videoTempURL path] error:NULL];
// Create your own array of UIImages
NSMutableArray *images = [NSMutableArray array];
for (int i=0; i<singleton.numberOfScreenshots; i++)
{
// This was our routine that returned a UIImage. Just use your own.
UIImage *image =[self uiimageFromCopyOfPixelBuffersUsingIndex:i];
// We used a routine to write text onto every image
// so we could validate the images were actually being written when testing. This was it below.
image = [self writeToImage:image Text:[NSString stringWithFormat:@"%i",i ]];
[images addObject:image];
}
// If you just want to manually add a few images - here is code you can uncomment
// NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"Documents/movie.mp4"]];
// NSArray *images = [[NSArray alloc] initWithObjects:
// [UIImage imageNamed:@"add_ar.png"],
// [UIImage imageNamed:@"add_ja.png"],
// [UIImage imageNamed:@"add_ru.png"],
// [UIImage imageNamed:@"add_ru.png"],
// [UIImage imageNamed:@"add_ar.png"],
// [UIImage imageNamed:@"add_ja.png"],
// [UIImage imageNamed:@"add_ru.png"],
// [UIImage imageNamed:@"add_ar.png"],
// [UIImage imageNamed:@"add_en.png"], nil];
[self writeImageAsMovie:images toPath:path size:CGSizeMake(height, width)];
}
This is the main method that creates your AssetWriter and adds images to it for writing.
(2) Wire up an AVAssetWriter
-(void)writeImageAsMovie:(NSArray *)array toPath:(NSString*)path size:(CGSize)size
{
NSError *error = nil;
// FIRST, start up an AVAssetWriter instance to write your video
// Give it a destination path (for us: tmp/temp.mov)
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
fileType:AVFileTypeQuickTimeMovie
error:&error];
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
(3) Start a writing session (NOTE: the method continues from above)
//Start a SESSION of writing.
// After you start a session, you will keep adding image frames
// until you are complete - then you will tell it you are done.
[videoWriter startWriting];
// This starts your video at time = 0
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL;
// This was just our utility class to get screen sizes etc.
ATHSingleton *singleton = [ATHSingleton singletons];
int i = 0;
while (1)
{
// Check if the writer is ready for more data, if not, just wait
if(writerInput.readyForMoreMediaData){
CMTime frameTime = CMTimeMake(150, 600);
// CMTime = Value and Timescale.
// Timescale = the number of tics per second you want
// Value is the number of tics
// For us - each frame we add will be 1/4th of a second
// Apple recommend 600 tics per second for video because it is a
// multiple of the standard video rates 24, 30, 60 fps etc.
CMTime lastTime=CMTimeMake(i*150, 600);
CMTime presentTime=CMTimeAdd(lastTime, frameTime);
if (i == 0) {presentTime = CMTimeMake(0, 600);}
// This ensures the first frame starts at 0.
if (i >= [array count])
{
buffer = NULL;
}
else
{
// This command grabs the next UIImage and converts it to a CGImage
buffer = [self pixelBufferFromCGImage:[[array objectAtIndex:i] CGImage]];
}
if (buffer)
{
// Give the CGImage to the AVAssetWriter to add to your video
[adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
i++;
}
else
{
(4) Finish the session (NOTE: the method continues from above)
//Finish the session:
// This is important to be done exactly in this order
[writerInput markAsFinished];
// WARNING: finishWriting in the solution above is deprecated.
// You now need to give a completion handler.
[videoWriter finishWritingWithCompletionHandler:^{
NSLog(@"Finished writing...checking completion status...");
if (videoWriter.status != AVAssetWriterStatusFailed && videoWriter.status == AVAssetWriterStatusCompleted)
{
NSLog(@"Video writing succeeded.");
// Move video to camera roll
// NOTE: You cannot write directly to the camera roll.
// You must first write to an iOS directory then move it!
NSURL *videoTempURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@", path]];
[self saveToCameraRoll:videoTempURL];
} else
{
NSLog(@"Video writing failed: %@", videoWriter.error);
}
}]; // end videoWriter finishWriting Block
CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
NSLog (@"Done");
break;
}
}
}
}
(5) Convert your UIImages to a CVPixelBufferRef
This method will give you a CV pixel buffer reference, which is what the AssetWriter needs. It is obtained from the CGImageRef which you get from your UIImage (above).
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
// This again was just our utility class for the height & width of the
// incoming video (640 height x 480 width)
ATHSingleton *singleton = [ATHSingleton singletons];
int height = singleton.screenHeight;
int width = singleton.screenWidth;
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width,
height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, width,
height, 8, 4*width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
(6) Move your video to the Camera Roll. Because AVAssetWriter cannot write directly to the camera roll, this moves the video from "tmp/temp.mov" (or whatever filename you chose above) to the camera roll.
- (void) saveToCameraRoll:(NSURL *)srcURL
{
NSLog(@"srcURL: %@", srcURL);
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
ALAssetsLibraryWriteVideoCompletionBlock videoWriteCompletionBlock =
^(NSURL *newURL, NSError *error) {
if (error) {
NSLog( @"Error writing image with metadata to Photo Library: %@", error );
} else {
NSLog( @"Wrote image with metadata to Photo Library %@", newURL.absoluteString);
}
};
if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:srcURL])
{
[library writeVideoAtPathToSavedPhotosAlbum:srcURL
completionBlock:videoWriteCompletionBlock];
}
}
Zoul's answer above gives a nice outline of what you will be doing. We extensively commented this code so you can see how it was done using code that actually works.
NOTE: This is a Swift 2.1 solution (iOS 8+, Xcode 7.2).
Last week I set out to write iOS code to generate a video from images. I had a little AVFoundation experience, but had never even heard of a CVPixelBuffer. I came across the answers on this page and also here. It took several days to dissect everything and put it all back together in Swift in a way that made sense to my brain. Below is what I came up with.
NOTE: If you copy/paste all of the code below into a single Swift file, it should compile. You will just need to tweak loadImages() and the RenderSettings values.
Here I group all of the export-related settings into a single RenderSettings struct.
import AVFoundation
import UIKit
import Photos
struct RenderSettings {
var width: CGFloat = 1280
var height: CGFloat = 720
var fps: Int32 = 2 // 2 frames per second
var avCodecKey = AVVideoCodecH264
var videoFilename = "render"
var videoFilenameExt = "mp4"
var size: CGSize {
return CGSize(width: width, height: height)
}
var outputURL: NSURL {
// Use the CachesDirectory so the rendered video file sticks around as long as we need it to.
// Using the CachesDirectory ensures the file won't be included in a backup of the app.
let fileManager = NSFileManager.defaultManager()
if let tmpDirURL = try? fileManager.URLForDirectory(.CachesDirectory, inDomain: .UserDomainMask, appropriateForURL: nil, create: true) {
return tmpDirURL.URLByAppendingPathComponent(videoFilename).URLByAppendingPathExtension(videoFilenameExt)
}
fatalError("URLForDirectory() failed")
}
}
The ImageAnimator class knows about your images and uses the VideoWriter class to perform the rendering. The idea is to keep the video content code separate from the low-level AVFoundation code. I also added saveToLibrary() here as a class function; it gets called at the end of the chain to save the video to the Photo Library.
class ImageAnimator {
// Apple suggests a timescale of 600 because it's a multiple of standard video rates 24, 25, 30, 60 fps etc.
static let kTimescale: Int32 = 600
let settings: RenderSettings
let videoWriter: VideoWriter
var images: [UIImage]!
var frameNum = 0
class func saveToLibrary(videoURL: NSURL) {
PHPhotoLibrary.requestAuthorization { status in
guard status == .Authorized else { return }
PHPhotoLibrary.sharedPhotoLibrary().performChanges({
PHAssetChangeRequest.creationRequestForAssetFromVideoAtFileURL(videoURL)
}) { success, error in
if !success {
print("Could not save video to photo library:", error)
}
}
}
}
class func removeFileAtURL(fileURL: NSURL) {
do {
try NSFileManager.defaultManager().removeItemAtPath(fileURL.path!)
}
catch _ as NSError {
// Assume file doesn't exist.
}
}
init(renderSettings: RenderSettings) {
settings = renderSettings
videoWriter = VideoWriter(renderSettings: settings)
images = loadImages()
}
func render(completion: ()->Void) {
// The VideoWriter will fail if a file exists at the URL, so clear it out first.
ImageAnimator.removeFileAtURL(settings.outputURL)
videoWriter.start()
videoWriter.render(appendPixelBuffers) {
ImageAnimator.saveToLibrary(self.settings.outputURL)
completion()
}
}
// Replace this logic with your own.
func loadImages() -> [UIImage] {
var images = [UIImage]()
for index in 1...10 {
let filename = "\(index).jpg"
images.append(UIImage(named: filename)!)
}
return images
}
// This is the callback function for VideoWriter.render()
func appendPixelBuffers(writer: VideoWriter) -> Bool {
let frameDuration = CMTimeMake(Int64(ImageAnimator.kTimescale / settings.fps), ImageAnimator.kTimescale)
while !images.isEmpty {
if writer.isReadyForData == false {
// Inform writer we have more buffers to write.
return false
}
let image = images.removeFirst()
let presentationTime = CMTimeMultiply(frameDuration, Int32(frameNum))
let success = videoWriter.addImage(image, withPresentationTime: presentationTime)
if success == false {
fatalError("addImage() failed")
}
frameNum++
}
// Inform writer all buffers have been written.
return true
}
}
The VideoWriter class does all of the AVFoundation heavy lifting. It is mostly a wrapper around AVAssetWriter and AVAssetWriterInput. It also contains the fancy code, written by someone other than me, that knows how to translate an image into a CVPixelBuffer.
class VideoWriter {
let renderSettings: RenderSettings
var videoWriter: AVAssetWriter!
var videoWriterInput: AVAssetWriterInput!
var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!
var isReadyForData: Bool {
return videoWriterInput?.readyForMoreMediaData ?? false
}
class func pixelBufferFromImage(image: UIImage, pixelBufferPool: CVPixelBufferPool, size: CGSize) -> CVPixelBuffer {
var pixelBufferOut: CVPixelBuffer?
let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
if status != kCVReturnSuccess {
fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
}
let pixelBuffer = pixelBufferOut!
CVPixelBufferLockBaseAddress(pixelBuffer, 0)
let data = CVPixelBufferGetBaseAddress(pixelBuffer)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let context = CGBitmapContextCreate(data, Int(size.width), Int(size.height),
8, CVPixelBufferGetBytesPerRow(pixelBuffer), rgbColorSpace, CGImageAlphaInfo.PremultipliedFirst.rawValue)
CGContextClearRect(context, CGRectMake(0, 0, size.width, size.height))
let horizontalRatio = size.width / image.size.width
let verticalRatio = size.height / image.size.height
//aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit
let newSize = CGSize(width: image.size.width * aspectRatio, height: image.size.height * aspectRatio)
let x = newSize.width < size.width ? (size.width - newSize.width) / 2 : 0
let y = newSize.height < size.height ? (size.height - newSize.height) / 2 : 0
CGContextDrawImage(context, CGRectMake(x, y, newSize.width, newSize.height), image.CGImage)
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0)
return pixelBuffer
}
init(renderSettings: RenderSettings) {
self.renderSettings = renderSettings
}
func start() {
let avOutputSettings: [String: AnyObject] = [
AVVideoCodecKey: renderSettings.avCodecKey,
AVVideoWidthKey: NSNumber(float: Float(renderSettings.width)),
AVVideoHeightKey: NSNumber(float: Float(renderSettings.height))
]
func createPixelBufferAdaptor() {
let sourcePixelBufferAttributesDictionary = [
kCVPixelBufferPixelFormatTypeKey as String: NSNumber(unsignedInt: kCVPixelFormatType_32ARGB),
kCVPixelBufferWidthKey as String: NSNumber(float: Float(renderSettings.width)),
kCVPixelBufferHeightKey as String: NSNumber(float: Float(renderSettings.height))
]
pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput,
sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
}
func createAssetWriter(outputURL: NSURL) -> AVAssetWriter {
guard let assetWriter = try? AVAssetWriter(URL: outputURL, fileType: AVFileTypeMPEG4) else {
fatalError("AVAssetWriter() failed")
}
guard assetWriter.canApplyOutputSettings(avOutputSettings, forMediaType: AVMediaTypeVideo) else {
fatalError("canApplyOutputSettings() failed")
}
return assetWriter
}
videoWriter = createAssetWriter(renderSettings.outputURL)
videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: avOutputSettings)
if videoWriter.canAddInput(videoWriterInput) {
videoWriter.addInput(videoWriterInput)
}
else {
fatalError("canAddInput() returned false")
}
// The pixel buffer adaptor must be created before we start writing.
createPixelBufferAdaptor()
if videoWriter.startWriting() == false {
fatalError("startWriting() failed")
}
videoWriter.startSessionAtSourceTime(kCMTimeZero)
precondition(pixelBufferAdaptor.pixelBufferPool != nil, "nil pixelBufferPool")
}
func render(appendPixelBuffers: (VideoWriter)->Bool, completion: ()->Void) {
precondition(videoWriter != nil, "Call start() to initialze the writer")
let queue = dispatch_queue_create("mediaInputQueue", nil)
videoWriterInput.requestMediaDataWhenReadyOnQueue(queue) {
let isFinished = appendPixelBuffers(self)
if isFinished {
self.videoWriterInput.markAsFinished()
self.videoWriter.finishWritingWithCompletionHandler() {
dispatch_async(dispatch_get_main_queue()) {
completion()
}
}
}
else {
// Fall through. The closure will be called again when the writer is ready.
}
}
}
func addImage(image: UIImage, withPresentationTime presentationTime: CMTime) -> Bool {
precondition(pixelBufferAdaptor != nil, "Call start() to initialze the writer")
let pixelBuffer = VideoWriter.pixelBufferFromImage(image, pixelBufferPool: pixelBufferAdaptor.pixelBufferPool!, size: renderSettings.size)
return pixelBufferAdaptor.appendPixelBuffer(pixelBuffer, withPresentationTime: presentationTime)
}
}
Once everything is in place, these are your 3 magic lines:
let settings = RenderSettings()
let imageAnimator = ImageAnimator(renderSettings: settings)
imageAnimator.render() {
print("yes")
}
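If you need something other than the defaults, tweak the settings before constructing the ImageAnimator. For example (a small sketch of my own that only uses the types defined above):
var settings = RenderSettings()
settings.fps = 30                      // 30 frames per second instead of 2
settings.videoFilename = "slideshow"   // writes slideshow.mp4 to the Caches directory
let imageAnimator = ImageAnimator(renderSettings: settings)
imageAnimator.render() {
    print("Render finished; file at \(settings.outputURL)")
}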
I took Zoul's main ideas, incorporated the AVAssetWriterInputPixelBufferAdaptor method, and made the beginnings of a little framework out of it.
Feel free to check it out and improve upon it! CEMovieMaker
Here is a Swift 2.x version tested on iOS 8. It combines answers from @Scott Raposa and @Praxiteles, along with code from @acj contributed for another question. The code from @acj is here: https://Gist.github.com/acj/6ae90aa1ebb8cad6b47b . @TimBull also provided code.
Like @Scott Raposa, I had never even heard of CVPixelBufferPoolCreatePixelBuffer, among several other functions.
What you see below was pieced together mostly through trial and error and from reading Apple docs. Please use with caution, and provide suggestions if there are mistakes.
Usage:
import UIKit
import AVFoundation
import Photos
writeImagesAsMovie(yourImages, videoPath: yourPath, videoSize: yourSize, videoFPS: 30)
Code:
func writeImagesAsMovie(allImages: [UIImage], videoPath: String, videoSize: CGSize, videoFPS: Int32) {
// Create AVAssetWriter to write video
guard let assetWriter = createAssetWriter(videoPath, size: videoSize) else {
print("Error converting images to video: AVAssetWriter not created")
return
}
// If here, AVAssetWriter exists so create AVAssetWriterInputPixelBufferAdaptor
let writerInput = assetWriter.inputs.filter{ $0.mediaType == AVMediaTypeVideo }.first!
let sourceBufferAttributes : [String : AnyObject] = [
kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32ARGB),
kCVPixelBufferWidthKey as String : videoSize.width,
kCVPixelBufferHeightKey as String : videoSize.height,
]
let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: sourceBufferAttributes)
// Start writing session
assetWriter.startWriting()
assetWriter.startSessionAtSourceTime(kCMTimeZero)
if (pixelBufferAdaptor.pixelBufferPool == nil) {
print("Error converting images to video: pixelBufferPool nil after starting session")
return
}
// -- Create queue for <requestMediaDataWhenReadyOnQueue>
let mediaQueue = dispatch_queue_create("mediaInputQueue", nil)
// -- Set video parameters
let frameDuration = CMTimeMake(1, videoFPS)
var frameCount = 0
// -- Add images to video
let numImages = allImages.count
writerInput.requestMediaDataWhenReadyOnQueue(mediaQueue, usingBlock: { () -> Void in
// Append unadded images to video but only while input ready
while (writerInput.readyForMoreMediaData && frameCount < numImages) {
let lastFrameTime = CMTimeMake(Int64(frameCount), videoFPS)
let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)
if !self.appendPixelBufferForImageAtURL(allImages[frameCount], pixelBufferAdaptor: pixelBufferAdaptor, presentationTime: presentationTime) {
print("Error converting images to video: AVAssetWriterInputPixelBufferAdapter failed to append pixel buffer")
return
}
frameCount += 1
}
// No more images to add? End video.
if (frameCount >= numImages) {
writerInput.markAsFinished()
assetWriter.finishWritingWithCompletionHandler {
if (assetWriter.error != nil) {
print("Error converting images to video: \(assetWriter.error)")
} else {
self.saveVideoToLibrary(NSURL(fileURLWithPath: videoPath))
print("Converted images to movie @ \(videoPath)")
}
}
}
})
}
func createAssetWriter(path: String, size: CGSize) -> AVAssetWriter? {
// Convert <path> to NSURL object
let pathURL = NSURL(fileURLWithPath: path)
// Return new asset writer or nil
do {
// Create asset writer
let newWriter = try AVAssetWriter(URL: pathURL, fileType: AVFileTypeMPEG4)
// Define settings for video input
let videoSettings: [String : AnyObject] = [
AVVideoCodecKey : AVVideoCodecH264,
AVVideoWidthKey : size.width,
AVVideoHeightKey : size.height,
]
// Add video input to writer
let assetWriterVideoInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
newWriter.addInput(assetWriterVideoInput)
// Return writer
print("Created asset writer for \(size.width)x\(size.height) video")
return newWriter
} catch {
print("Error creating asset writer: \(error)")
return nil
}
}
func appendPixelBufferForImageAtURL(image: UIImage, pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor, presentationTime: CMTime) -> Bool {
var appendSucceeded = false
autoreleasepool {
if let pixelBufferPool = pixelBufferAdaptor.pixelBufferPool {
let pixelBufferPointer = UnsafeMutablePointer<CVPixelBuffer?>.alloc(1)
let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(
kCFAllocatorDefault,
pixelBufferPool,
pixelBufferPointer
)
if let pixelBuffer = pixelBufferPointer.memory where status == 0 {
fillPixelBufferFromImage(image, pixelBuffer: pixelBuffer)
appendSucceeded = pixelBufferAdaptor.appendPixelBuffer(pixelBuffer, withPresentationTime: presentationTime)
pixelBufferPointer.destroy()
} else {
NSLog("Error: Failed to allocate pixel buffer from pool")
}
pixelBufferPointer.dealloc(1)
}
}
return appendSucceeded
}
func fillPixelBufferFromImage(image: UIImage, pixelBuffer: CVPixelBufferRef) {
CVPixelBufferLockBaseAddress(pixelBuffer, 0)
let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
// Create CGBitmapContext
let context = CGBitmapContextCreate(
pixelData,
Int(image.size.width),
Int(image.size.height),
8,
CVPixelBufferGetBytesPerRow(pixelBuffer),
rgbColorSpace,
CGImageAlphaInfo.PremultipliedFirst.rawValue
)
// Draw image into context
CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage)
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0)
}
func saveVideoToLibrary(videoURL: NSURL) {
PHPhotoLibrary.requestAuthorization { status in
// Return if unauthorized
guard status == .Authorized else {
print("Error saving video: unauthorized access")
return
}
// If here, save video to library
PHPhotoLibrary.sharedPhotoLibrary().performChanges({
PHAssetChangeRequest.creationRequestForAssetFromVideoAtFileURL(videoURL)
}) { success, error in
if !success {
print("Error saving video: \(error)")
}
}
}
}
Here is the Swift 3 version that converts an image array into a video:
import Foundation
import AVFoundation
import UIKit
typealias CXEMovieMakerCompletion = (URL) -> Void
typealias CXEMovieMakerUIImageExtractor = (AnyObject) -> UIImage?
public class ImagesToVideoUtils: NSObject {
static let paths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
static let tempPath = paths[0] + "/exprotvideo.mp4"
static let fileURL = URL(fileURLWithPath: tempPath)
// static let tempPath = NSTemporaryDirectory() + "/exprotvideo.mp4"
// static let fileURL = URL(fileURLWithPath: tempPath)
var assetWriter:AVAssetWriter!
var writeInput:AVAssetWriterInput!
var bufferAdapter:AVAssetWriterInputPixelBufferAdaptor!
var videoSettings:[String : Any]!
var frameTime:CMTime!
//var fileURL:URL!
var completionBlock: CXEMovieMakerCompletion?
var movieMakerUIImageExtractor:CXEMovieMakerUIImageExtractor?
public class func videoSettings(codec:String, width:Int, height:Int) -> [String: Any]{
if(Int(width) % 16 != 0){
print("warning: video settings width must be divisible by 16")
}
let videoSettings:[String: Any] = [AVVideoCodecKey: AVVideoCodecJPEG, //AVVideoCodecH264,
AVVideoWidthKey: width,
AVVideoHeightKey: height]
return videoSettings
}
public init(videoSettings: [String: Any]) {
super.init()
if(FileManager.default.fileExists(atPath: ImagesToVideoUtils.tempPath)){
guard (try? FileManager.default.removeItem(atPath: ImagesToVideoUtils.tempPath)) != nil else {
print("remove path failed")
return
}
}
self.assetWriter = try! AVAssetWriter(url: ImagesToVideoUtils.fileURL, fileType: AVFileTypeQuickTimeMovie)
self.videoSettings = videoSettings
self.writeInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
assert(self.assetWriter.canAdd(self.writeInput), "add failed")
self.assetWriter.add(self.writeInput)
let bufferAttributes:[String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32ARGB)]
self.bufferAdapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: self.writeInput, sourcePixelBufferAttributes: bufferAttributes)
self.frameTime = CMTimeMake(1, 5)
}
func createMovieFrom(urls: [URL], withCompletion: @escaping CXEMovieMakerCompletion){
self.createMovieFromSource(images: urls as [AnyObject], extractor:{(inputObject:AnyObject) ->UIImage? in
return UIImage(data: try! Data(contentsOf: inputObject as! URL))}, withCompletion: withCompletion)
}
func createMovieFrom(images: [UIImage], withCompletion: @escaping CXEMovieMakerCompletion){
self.createMovieFromSource(images: images, extractor: {(inputObject:AnyObject) -> UIImage? in
return inputObject as? UIImage}, withCompletion: withCompletion)
}
func createMovieFromSource(images: [AnyObject], extractor: @escaping CXEMovieMakerUIImageExtractor, withCompletion: @escaping CXEMovieMakerCompletion){
self.completionBlock = withCompletion
self.assetWriter.startWriting()
self.assetWriter.startSession(atSourceTime: kCMTimeZero)
let mediaInputQueue = DispatchQueue(label: "mediaInputQueue")
var i = 0
let frameNumber = images.count
self.writeInput.requestMediaDataWhenReady(on: mediaInputQueue){
while(true){
if(i >= frameNumber){
break
}
if (self.writeInput.isReadyForMoreMediaData){
var sampleBuffer:CVPixelBuffer?
autoreleasepool{
if let img = extractor(images[i]) {
sampleBuffer = self.newPixelBufferFrom(cgImage: img.cgImage!)
} else {
// Skip this frame if the extractor could not produce an image
// (the original force-unwrap would have crashed here).
i += 1
print("Warning: could not extract one of the frames")
}
}
if (sampleBuffer != nil){
if(i == 0){
self.bufferAdapter.append(sampleBuffer!, withPresentationTime: kCMTimeZero)
}else{
let value = i - 1
let lastTime = CMTimeMake(Int64(value), self.frameTime.timescale)
let presentTime = CMTimeAdd(lastTime, self.frameTime)
self.bufferAdapter.append(sampleBuffer!, withPresentationTime: presentTime)
}
i = i + 1
}
}
}
self.writeInput.markAsFinished()
self.assetWriter.finishWriting {
DispatchQueue.main.sync {
self.completionBlock!(ImagesToVideoUtils.fileURL)
}
}
}
}
func newPixelBufferFrom(cgImage:CGImage) -> CVPixelBuffer?{
let options:[String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true, kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
var pxbuffer:CVPixelBuffer?
let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int
let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")
CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let context = CGContext(data: pxdata, width: frameWidth, height: frameHeight, bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
assert(context != nil, "context is nil")
context!.concatenate(CGAffineTransform.identity)
context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
return pxbuffer
}
}
I use it together with screen capture, to basically create a video of a screen capture. Here is the full story / complete example.
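The screen-capture side is not shown here; as a rough sketch (my assumption, not part of this answer), each frame could come from snapshotting a view into a UIImage, and the resulting array is then passed to createMovieFrom(images:withCompletion:):
import UIKit
// Snapshot a view into a UIImage (one captured frame).
func snapshot(of view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, true, 0)
    defer { UIGraphicsEndImageContext() }
    _ = view.drawHierarchy(in: view.bounds, afterScreenUpdates: false)
    return UIGraphicsGetImageFromCurrentImageContext()
}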
I just translated @Scott Raposa's answer to Swift 3, with a few very minor changes.
import AVFoundation
import UIKit
import Photos
struct RenderSettings {
var size : CGSize = .zero
var fps: Int32 = 6 // frames per second
var avCodecKey = AVVideoCodecH264
var videoFilename = "render"
var videoFilenameExt = "mp4"
var outputURL: URL {
// Use the CachesDirectory so the rendered video file sticks around as long as we need it to.
// Using the CachesDirectory ensures the file won't be included in a backup of the app.
let fileManager = FileManager.default
if let tmpDirURL = try? fileManager.url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: true) {
return tmpDirURL.appendingPathComponent(videoFilename).appendingPathExtension(videoFilenameExt)
}
fatalError("URLForDirectory() failed")
}
}
class ImageAnimator {
// Apple suggests a timescale of 600 because it's a multiple of standard video rates 24, 25, 30, 60 fps etc.
static let kTimescale: Int32 = 600
let settings: RenderSettings
let videoWriter: VideoWriter
var images: [UIImage]!
var frameNum = 0
class func saveToLibrary(videoURL: URL) {
PHPhotoLibrary.requestAuthorization { status in
guard status == .authorized else { return }
PHPhotoLibrary.shared().performChanges({
PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: videoURL)
}) { success, error in
if !success {
print("Could not save video to photo library:", error)
}
}
}
}
class func removeFileAtURL(fileURL: URL) {
do {
try FileManager.default.removeItem(atPath: fileURL.path)
}
catch _ as NSError {
// Assume file doesn't exist.
}
}
init(renderSettings: RenderSettings) {
settings = renderSettings
videoWriter = VideoWriter(renderSettings: settings)
// images = loadImages()
}
func render(completion: (()->Void)?) {
// The VideoWriter will fail if a file exists at the URL, so clear it out first.
ImageAnimator.removeFileAtURL(fileURL: settings.outputURL)
videoWriter.start()
videoWriter.render(appendPixelBuffers: appendPixelBuffers) {
ImageAnimator.saveToLibrary(videoURL: self.settings.outputURL)
completion?()
}
}
// // Replace this logic with your own.
// func loadImages() -> [UIImage] {
// var images = [UIImage]()
// for index in 1...10 {
// let filename = "\(index).jpg"
// images.append(UIImage(named: filename)!)
// }
// return images
// }
// This is the callback function for VideoWriter.render()
func appendPixelBuffers(writer: VideoWriter) -> Bool {
let frameDuration = CMTimeMake(Int64(ImageAnimator.kTimescale / settings.fps), ImageAnimator.kTimescale)
while !images.isEmpty {
if writer.isReadyForData == false {
// Inform writer we have more buffers to write.
return false
}
let image = images.removeFirst()
let presentationTime = CMTimeMultiply(frameDuration, Int32(frameNum))
let success = videoWriter.addImage(image: image, withPresentationTime: presentationTime)
if success == false {
fatalError("addImage() failed")
}
frameNum += 1
}
// Inform writer all buffers have been written.
return true
}
}
class VideoWriter {
let renderSettings: RenderSettings
var videoWriter: AVAssetWriter!
var videoWriterInput: AVAssetWriterInput!
var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!
var isReadyForData: Bool {
return videoWriterInput?.isReadyForMoreMediaData ?? false
}
class func pixelBufferFromImage(image: UIImage, pixelBufferPool: CVPixelBufferPool, size: CGSize) -> CVPixelBuffer {
var pixelBufferOut: CVPixelBuffer?
let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
if status != kCVReturnSuccess {
fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
}
let pixelBuffer = pixelBufferOut!
CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
let data = CVPixelBufferGetBaseAddress(pixelBuffer)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let context = CGContext(data: data, width: Int(size.width), height: Int(size.height),
bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
context!.clear(CGRect(x:0,y: 0,width: size.width,height: size.height))
let horizontalRatio = size.width / image.size.width
let verticalRatio = size.height / image.size.height
//aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit
let newSize = CGSize(width: image.size.width * aspectRatio, height: image.size.height * aspectRatio)
let x = newSize.width < size.width ? (size.width - newSize.width) / 2 : 0
let y = newSize.height < size.height ? (size.height - newSize.height) / 2 : 0
context?.draw(image.cgImage!, in: CGRect(x:x,y: y, width: newSize.width, height: newSize.height))
CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
return pixelBuffer
}
init(renderSettings: RenderSettings) {
self.renderSettings = renderSettings
}
func start() {
let avOutputSettings: [String: Any] = [
AVVideoCodecKey: renderSettings.avCodecKey,
AVVideoWidthKey: NSNumber(value: Float(renderSettings.size.width)),
AVVideoHeightKey: NSNumber(value: Float(renderSettings.size.height))
]
func createPixelBufferAdaptor() {
let sourcePixelBufferAttributesDictionary = [
kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32ARGB),
kCVPixelBufferWidthKey as String: NSNumber(value: Float(renderSettings.size.width)),
kCVPixelBufferHeightKey as String: NSNumber(value: Float(renderSettings.size.height))
]
pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput,
sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
}
func createAssetWriter(outputURL: URL) -> AVAssetWriter {
guard let assetWriter = try? AVAssetWriter(outputURL: outputURL, fileType: AVFileTypeMPEG4) else {
fatalError("AVAssetWriter() failed")
}
guard assetWriter.canApply(outputSettings: avOutputSettings, forMediaType: AVMediaTypeVideo) else {
fatalError("canApplyOutputSettings() failed")
}
return assetWriter
}
videoWriter = createAssetWriter(outputURL: renderSettings.outputURL)
videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: avOutputSettings)
if videoWriter.canAdd(videoWriterInput) {
videoWriter.add(videoWriterInput)
}
else {
fatalError("canAddInput() returned false")
}
// The pixel buffer adaptor must be created before we start writing.
createPixelBufferAdaptor()
if videoWriter.startWriting() == false {
fatalError("startWriting() failed")
}
videoWriter.startSession(atSourceTime: kCMTimeZero)
precondition(pixelBufferAdaptor.pixelBufferPool != nil, "nil pixelBufferPool")
}
func render(appendPixelBuffers: ((VideoWriter)->Bool)?, completion: (()->Void)?) {
precondition(videoWriter != nil, "Call start() to initialze the writer")
let queue = DispatchQueue(label: "mediaInputQueue")
videoWriterInput.requestMediaDataWhenReady(on: queue) {
let isFinished = appendPixelBuffers?(self) ?? false
if isFinished {
self.videoWriterInput.markAsFinished()
self.videoWriter.finishWriting() {
DispatchQueue.main.async {
completion?()
}
}
}
else {
// Fall through. The closure will be called again when the writer is ready.
}
}
}
func addImage(image: UIImage, withPresentationTime presentationTime: CMTime) -> Bool {
precondition(pixelBufferAdaptor != nil, "Call start() to initialze the writer")
let pixelBuffer = VideoWriter.pixelBufferFromImage(image: image, pixelBufferPool: pixelBufferAdaptor.pixelBufferPool!, size: renderSettings.size)
return pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
}
}
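As with the Swift 2.1 answer, the call site is only a few lines. Note that this translation no longer calls loadImages() in init, so assign the images array yourself before rendering. A small sketch of my own (myImages is assumed to be your [UIImage]):
var settings = RenderSettings()
settings.size = CGSize(width: 1280, height: 720)
settings.fps = 30
let imageAnimator = ImageAnimator(renderSettings: settings)
imageAnimator.images = myImages   // myImages: [UIImage], supplied by you
imageAnimator.render(completion: {
    print("Render finished; file at \(settings.outputURL)")
})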