[iphone] Drawing a waveform with AVAssetReader

I am reading a song from the iPod library using an asset URL (called audioUrl in the code). I can play it in various ways, cut it, and do some processing with it, but ... I really don't know what to do with this part. The CMSampleBufferRef holds the data for drawing the waveform! I need information about the peak values. How can I get them?

    NSError *error = nil;
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:audioUrl error:&error];

    AVAssetTrack *songTrack = [audioUrl.tracks objectAtIndex:0];
    AVAssetReaderTrackOutput *output = [[AVAssetReaderTrackOutput alloc] initWithTrack:songTrack outputSettings:nil];
    [reader addOutput:output];
    [output release];

    NSMutableData *fullSongData = [[NSMutableData alloc] init];
    [reader startReading];

    while (reader.status == AVAssetReaderStatusReading){

        AVAssetReaderTrackOutput *trackOutput =
            (AVAssetReaderTrackOutput *)[reader.outputs objectAtIndex:0];

        CMSampleBufferRef sampleBufferRef = [trackOutput copyNextSampleBuffer];

        if (sampleBufferRef){ /* what am I going to do with this? */ }
    }

Please help!



Answer

I was looking for something similar and decided to “roll my own.” I realize this is an old post, but in case anyone else is looking for this, here is my solution. It is relatively quick and dirty, and it normalizes the image to “full scale.” The images it creates are “wide,” i.e. you need to put them in a UIScrollView or otherwise manage the display.
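
For display, a minimal hosting sketch (assuming an existing scrollView outlet and a rendered waveImage; these names are hypothetical, not part of the original answer):

    UIImageView *waveView = [[UIImageView alloc] initWithImage:waveImage];
    scrollView.contentSize = waveView.frame.size; // let the user pan across the full-width waveform
    [scrollView addSubview:waveView];
    [waveView release];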

This is based on several answers given to this question.

Sample output

[image: sample waveform]

Edit: I have added logarithmic versions of the averaging and render methods; see the end of this message for the alternate version and comparison outputs. I personally prefer the original linear version, but decided to post it in case someone can improve on the algorithm used.

You will need the following imports:

#import <MediaPlayer/MediaPlayer.h>
#import <AVFoundation/AVFoundation.h>


First, a generic rendering method that takes a pointer to averaged sample data and returns a UIImage. Note that these samples are not playable audio samples.

-(UIImage *) audioImageGraph:(SInt16 *) samples
                normalizeMax:(SInt16) normalizeMax
                 sampleCount:(NSInteger) sampleCount
                channelCount:(NSInteger) channelCount
                 imageHeight:(float) imageHeight {

    CGSize imageSize = CGSizeMake(sampleCount, imageHeight);
    UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor);
    CGContextSetAlpha(context,1.0);
    CGRect rect;
    rect.size = imageSize;
    rect.origin.x = 0;
    rect.origin.y = 0;

    CGColorRef leftcolor = [[UIColor whiteColor] CGColor];
    CGColorRef rightcolor = [[UIColor redColor] CGColor];

    CGContextFillRect(context, rect);

    CGContextSetLineWidth(context, 1.0);

    float halfGraphHeight = (imageHeight / 2) / (float) channelCount ;
    float centerLeft = halfGraphHeight;
    float centerRight = (halfGraphHeight*3) ;
    float sampleAdjustmentFactor = (imageHeight/ (float) channelCount) / (float) normalizeMax;

    for (NSInteger intSample = 0 ; intSample < sampleCount ; intSample ++ ) {
        SInt16 left = *samples++;
        float pixels = (float) left;
        pixels *= sampleAdjustmentFactor;
        CGContextMoveToPoint(context, intSample, centerLeft-pixels);
        CGContextAddLineToPoint(context, intSample, centerLeft+pixels);
        CGContextSetStrokeColorWithColor(context, leftcolor);
        CGContextStrokePath(context);

        if (channelCount==2) {
            SInt16 right = *samples++;
            float pixels = (float) right;
            pixels *= sampleAdjustmentFactor;
            CGContextMoveToPoint(context, intSample, centerRight - pixels);
            CGContextAddLineToPoint(context, intSample, centerRight + pixels);
            CGContextSetStrokeColorWithColor(context, rightcolor);
            CGContextStrokePath(context);
        }
    }

    // Create new image
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();

    // Tidy up
    UIGraphicsEndImageContext();

    return newImage;
}

Next, a method that takes an AVURLAsset and returns PNG image data

- (NSData *) renderPNGAudioPictogramForAsset:(AVURLAsset *)songAsset {

    NSError * error = nil;
    AVAssetReader * reader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error];
    AVAssetTrack * songTrack = [songAsset.tracks objectAtIndex:0];

    NSDictionary* outputSettingsDict = [[NSDictionary alloc] initWithObjectsAndKeys:
                                        [NSNumber numberWithInt:kAudioFormatLinearPCM],AVFormatIDKey,
                                        //     [NSNumber numberWithInt:44100.0],AVSampleRateKey, /*Not Supported*/
                                        //     [NSNumber numberWithInt: 2],AVNumberOfChannelsKey,    /*Not Supported*/
                                        [NSNumber numberWithInt:16],AVLinearPCMBitDepthKey,
                                        [NSNumber numberWithBool:NO],AVLinearPCMIsBigEndianKey,
                                        [NSNumber numberWithBool:NO],AVLinearPCMIsFloatKey,
                                        [NSNumber numberWithBool:NO],AVLinearPCMIsNonInterleaved,
                                        nil];

    AVAssetReaderTrackOutput* output = [[AVAssetReaderTrackOutput alloc] initWithTrack:songTrack outputSettings:outputSettingsDict];

    [reader addOutput:output];
    [output release];

    UInt32 sampleRate,channelCount;

    NSArray* formatDesc = songTrack.formatDescriptions;
    for(unsigned int i = 0; i < [formatDesc count]; ++i) {
        CMAudioFormatDescriptionRef item = (CMAudioFormatDescriptionRef)[formatDesc objectAtIndex:i];
        const AudioStreamBasicDescription* fmtDesc = CMAudioFormatDescriptionGetStreamBasicDescription (item);
        if(fmtDesc ) {

            sampleRate = fmtDesc->mSampleRate;
            channelCount = fmtDesc->mChannelsPerFrame;

            //    NSLog(@"channels:%u, bytes/packet: %u, sampleRate %f",fmtDesc->mChannelsPerFrame, fmtDesc->mBytesPerPacket,fmtDesc->mSampleRate);
        }
    }

    UInt32 bytesPerSample = 2 * channelCount;
    SInt16 normalizeMax = 0;

    NSMutableData * fullSongData = [[NSMutableData alloc] init];
    [reader startReading];

    UInt64 totalBytes = 0;
    SInt64 totalLeft = 0;
    SInt64 totalRight = 0;
    NSInteger sampleTally = 0;

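    // average enough samples for one output point (pixel column) per 1/50th of a second of audio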
    NSInteger samplesPerPixel = sampleRate / 50;

    while (reader.status == AVAssetReaderStatusReading){

        AVAssetReaderTrackOutput * trackOutput = (AVAssetReaderTrackOutput *)[reader.outputs objectAtIndex:0];
        CMSampleBufferRef sampleBufferRef = [trackOutput copyNextSampleBuffer];

        if (sampleBufferRef){
            CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(sampleBufferRef);

            size_t length = CMBlockBufferGetDataLength(blockBufferRef);
            totalBytes += length;

            NSAutoreleasePool *wader = [[NSAutoreleasePool alloc] init];

            NSMutableData * data = [NSMutableData dataWithLength:length];
            CMBlockBufferCopyDataBytes(blockBufferRef, 0, length, data.mutableBytes);

            SInt16 * samples = (SInt16 *) data.mutableBytes;
            int sampleCount = length / bytesPerSample;
            for (int i = 0; i < sampleCount ; i ++) {

                SInt16 left = *samples++;
                totalLeft  += left;

                SInt16 right;
                if (channelCount==2) {
                    right = *samples++;
                    totalRight += right;
                }

                sampleTally++;

                if (sampleTally > samplesPerPixel) {

                    left  = totalLeft / sampleTally;

                    SInt16 fix = abs(left);
                    if (fix > normalizeMax) {
                        normalizeMax = fix;
                    }

                    [fullSongData appendBytes:&left length:sizeof(left)];

                    if (channelCount==2) {
                        right = totalRight / sampleTally;

                        SInt16 fix = abs(right);
                        if (fix > normalizeMax) {
                            normalizeMax = fix;
                        }

                        [fullSongData appendBytes:&right length:sizeof(right)];
                    }

                    totalLeft   = 0;
                    totalRight  = 0;
                    sampleTally = 0;
                }
            }

           [wader drain];

            CMSampleBufferInvalidate(sampleBufferRef);
            CFRelease(sampleBufferRef);
        }
    }

    NSData * finalData = nil;

    if (reader.status == AVAssetReaderStatusFailed || reader.status == AVAssetReaderStatusUnknown){
        // Something went wrong. return nil

        return nil;
    }

    if (reader.status == AVAssetReaderStatusCompleted){

        NSLog(@"rendering output graphics using normalizeMax %d",normalizeMax);

        UIImage *test = [self audioImageGraph:(SInt16 *)fullSongData.bytes
                                 normalizeMax:normalizeMax
                                  sampleCount:fullSongData.length / 4
                                 channelCount:2
                                  imageHeight:100];

        finalData = imageToData(test);
    }

    [fullSongData release];
    [reader release];

    return finalData;
}
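
As a quick usage sketch (assuming an MPMediaItem named item obtained elsewhere, e.g. from an MPMediaPickerController; this surrounding code is illustrative, not part of the original answer):

    NSURL *assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];
    AVURLAsset *songAsset = [[AVURLAsset alloc] initWithURL:assetURL options:nil];

    NSData *pngData = [self renderPNGAudioPictogramForAsset:songAsset];
    UIImage *waveform = pngData ? [UIImage imageWithData:pngData] : nil;
    [songAsset release];
    // hand waveform to a UIImageView inside a UIScrollView, as noted above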


Advanced options:

Finally, if you want to play the audio with AVAudioPlayer, you need to cache it to your app's bundle cache folder. Since I was doing that anyway, I decided to cache the image data as well, and wrapped the whole thing into a UIImage category. You need to include this open source offering (TSLibraryImport, used below) to extract the audio, and some code from here to handle the background threading features.


First, some defines, plus a couple of generic class methods for handling path names, etc.

//#define imgExt @"jpg"
//#define imageToData(x) UIImageJPEGRepresentation(x,4)

#define imgExt @"png"
#define imageToData(x) UIImagePNGRepresentation(x)

+ (NSString *) assetCacheFolder {
    NSArray *assetFolderRoot = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES);
    return [NSString stringWithFormat:@"%@/audio", [assetFolderRoot objectAtIndex:0]];
}

+ (NSString *) cachedAudioPictogramPathForMPMediaItem:(MPMediaItem*) item {
    NSString *assetFolder = [[self class] assetCacheFolder];
    NSNumber * libraryId = [item valueForProperty:MPMediaItemPropertyPersistentID];
    NSString *assetPictogramFilename = [NSString stringWithFormat:@"asset_%@.%@",libraryId,imgExt];
    return [NSString stringWithFormat:@"%@/%@", assetFolder, assetPictogramFilename];
}

+ (NSString *) cachedAudioFilepathForMPMediaItem:(MPMediaItem*) item {
    NSString *assetFolder = [[self class] assetCacheFolder];

    NSURL    * assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];
    NSNumber * libraryId = [item valueForProperty:MPMediaItemPropertyPersistentID];

    NSString *assetFileExt = [[[assetURL path] lastPathComponent] pathExtension];
    NSString *assetFilename = [NSString stringWithFormat:@"asset_%@.%@",libraryId,assetFileExt];
    return [NSString stringWithFormat:@"%@/%@", assetFolder, assetFilename];
}

+ (NSURL *) cachedAudioURLForMPMediaItem:(MPMediaItem*) item {
    NSString *assetFilepath = [[self class] cachedAudioFilepathForMPMediaItem:item];
    return [NSURL fileURLWithPath:assetFilepath];
}


Now, the init method that does the “business”

- (id) initWithMPMediaItem:(MPMediaItem*) item
           completionBlock:(void (^)(UIImage* delayedImagePreparation))completionBlock  {

    NSFileManager *fman = [NSFileManager defaultManager];
    NSString *assetPictogramFilepath = [[self class] cachedAudioPictogramPathForMPMediaItem:item];

    if ([fman fileExistsAtPath:assetPictogramFilepath]) {

        NSLog(@"Returning cached waveform pictogram: %@",[assetPictogramFilepath lastPathComponent]);

        self = [self initWithContentsOfFile:assetPictogramFilepath];
        return self;
    }

    NSString *assetFilepath = [[self class] cachedAudioFilepathForMPMediaItem:item];

    NSURL *assetFileURL = [NSURL fileURLWithPath:assetFilepath];

    if ([fman fileExistsAtPath:assetFilepath]) {

        NSLog(@"scanning cached audio data to create UIImage file: %@",[assetFilepath lastPathComponent]);

        [assetFileURL retain];
        [assetPictogramFilepath retain];

        [NSThread MCSM_performBlockInBackground: ^{

            AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:assetFileURL options:nil];
            NSData *waveFormData = [self renderPNGAudioPictogramForAsset:asset];

            [waveFormData writeToFile:assetPictogramFilepath atomically:YES];

            [assetFileURL release];
            [assetPictogramFilepath release];

            if (completionBlock) {

                [waveFormData retain];
                [NSThread MCSM_performBlockOnMainThread:^{

                    UIImage *result = [UIImage imageWithData:waveFormData];

                    NSLog(@"returning rendered pictogram on main thread (%d bytes %@ data in UIImage %0.0f x %0.0f pixels)",waveFormData.length,[imgExt uppercaseString],result.size.width,result.size.height);

                    completionBlock(result);

                    [waveFormData release];
                }];
            }
        }];

        return nil;

    } else {

        NSString *assetFolder = [[self class] assetCacheFolder];

        [fman createDirectoryAtPath:assetFolder withIntermediateDirectories:YES attributes:nil error:nil];

        NSLog(@"Preparing to import audio asset data %@",[assetFilepath lastPathComponent]);

        [assetPictogramFilepath retain];
        [assetFileURL retain];

        TSLibraryImport* import = [[TSLibraryImport alloc] init];
        NSURL    * assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];

         [import importAsset:assetURL toURL:assetFileURL completionBlock:^(TSLibraryImport* import) {
            //check the status and error properties of
            //TSLibraryImport

            if (import.error) {

                NSLog (@"audio data import failed:%@",import.error);

            } else{
                NSLog (@"Creating waveform pictogram file: %@", [assetPictogramFilepath lastPathComponent]);
                AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:assetFileURL options:nil];
                NSData *waveFormData = [self renderPNGAudioPictogramForAsset:asset];

                [waveFormData writeToFile:assetPictogramFilepath atomically:YES];

                if (completionBlock) {
                     [waveFormData retain];
                    [NSThread MCSM_performBlockOnMainThread:^{

                        UIImage *result = [UIImage imageWithData:waveFormData];
                        NSLog(@"returning rendered pictogram on main thread (%d bytes %@ data in UIImage %0.0f x %0.0f pixels)",waveFormData.length,[imgExt uppercaseString],result.size.width,result.size.height);

                        completionBlock(result);

                        [waveFormData release];
                    }];
                }
            }

            [assetPictogramFilepath release];
            [assetFileURL release];

        }  ];

        return nil;
    }
}


An example of calling this:

-(void) importMediaItem {

    MPMediaItem* item = [self mediaItem];

    // since we will be needing this for playback, save the url to the cached audio.
    [url release];
    url = [[UIImage cachedAudioURLForMPMediaItem:item] retain];

    [waveFormImage release];

    waveFormImage = [[UIImage alloc ] initWithMPMediaItem:item completionBlock:^(UIImage* delayedImagePreparation){

        waveFormImage = [delayedImagePreparation retain];
        [self displayWaveFormImage];
    }];

    if (waveFormImage) {
        [waveFormImage retain];
        [self displayWaveFormImage];
    }
}
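
Note that this init method returns nil whenever the pictogram has to be rendered asynchronously; in that case the finished UIImage is delivered later through the completion block, which is why the caller above handles both the immediate (cached) and the delayed result.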

The logarithmic versions of the averaging and render methods. Each averaged value is converted to decibels relative to full scale (0 dB = 32767) and clamped to a -90 dB noise floor:

#define absX(x) (x<0?0-x:x)
#define minMaxX(x,mn,mx) (x<=mn?mn:(x>=mx?mx:x))
#define noiseFloor (-90.0)
#define decibel(amplitude) (20.0 * log10(absX(amplitude)/32767.0))
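
For example, a full-scale 16-bit sample gives decibel(32767) = 20 * log10(32767/32767) = 0 dB, while a sample of about 1036 gives roughly 20 * log10(1036/32767) ≈ -30 dB; minMaxX then clamps everything to the [-90, 0] range defined by noiseFloor.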

-(UIImage *) audioImageLogGraph:(Float32 *) samples
                normalizeMax:(Float32) normalizeMax
                 sampleCount:(NSInteger) sampleCount
                channelCount:(NSInteger) channelCount
                 imageHeight:(float) imageHeight {

    CGSize imageSize = CGSizeMake(sampleCount, imageHeight);
    UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor);
    CGContextSetAlpha(context,1.0);
    CGRect rect;
    rect.size = imageSize;
    rect.origin.x = 0;
    rect.origin.y = 0;

    CGColorRef leftcolor = [[UIColor whiteColor] CGColor];
    CGColorRef rightcolor = [[UIColor redColor] CGColor];

    CGContextFillRect(context, rect);

    CGContextSetLineWidth(context, 1.0);

    float halfGraphHeight = (imageHeight / 2) / (float) channelCount ;
    float centerLeft = halfGraphHeight;
    float centerRight = (halfGraphHeight*3) ;
    float sampleAdjustmentFactor = (imageHeight/ (float) channelCount) / (normalizeMax - noiseFloor) / 2;

    for (NSInteger intSample = 0 ; intSample < sampleCount ; intSample ++ ) {
        Float32 left = *samples++;
        float pixels = (left - noiseFloor) * sampleAdjustmentFactor;
        CGContextMoveToPoint(context, intSample, centerLeft-pixels);
        CGContextAddLineToPoint(context, intSample, centerLeft+pixels);
        CGContextSetStrokeColorWithColor(context, leftcolor);
        CGContextStrokePath(context);

        if (channelCount==2) {
            Float32 right = *samples++;
            float pixels = (right - noiseFloor) * sampleAdjustmentFactor;
            CGContextMoveToPoint(context, intSample, centerRight - pixels);
            CGContextAddLineToPoint(context, intSample, centerRight + pixels);
            CGContextSetStrokeColorWithColor(context, rightcolor);
            CGContextStrokePath(context);
        }
    }

    // Create new image
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();

    // Tidy up
    UIGraphicsEndImageContext();

    return newImage;
}

- (NSData *) renderPNGAudioPictogramLogForAsset:(AVURLAsset *)songAsset {

    NSError * error = nil;
    AVAssetReader * reader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error];
    AVAssetTrack * songTrack = [songAsset.tracks objectAtIndex:0];

    NSDictionary* outputSettingsDict = [[NSDictionary alloc] initWithObjectsAndKeys:
                                        [NSNumber numberWithInt:kAudioFormatLinearPCM],AVFormatIDKey,
                                        //     [NSNumber numberWithInt:44100.0],AVSampleRateKey, /*Not Supported*/
                                        //     [NSNumber numberWithInt: 2],AVNumberOfChannelsKey,    /*Not Supported*/

                                        [NSNumber numberWithInt:16],AVLinearPCMBitDepthKey,
                                        [NSNumber numberWithBool:NO],AVLinearPCMIsBigEndianKey,
                                        [NSNumber numberWithBool:NO],AVLinearPCMIsFloatKey,
                                        [NSNumber numberWithBool:NO],AVLinearPCMIsNonInterleaved,
                                        nil];

    AVAssetReaderTrackOutput* output = [[AVAssetReaderTrackOutput alloc] initWithTrack:songTrack outputSettings:outputSettingsDict];

    [reader addOutput:output];
    [output release];

    UInt32 sampleRate,channelCount;

    NSArray* formatDesc = songTrack.formatDescriptions;
    for(unsigned int i = 0; i < [formatDesc count]; ++i) {
        CMAudioFormatDescriptionRef item = (CMAudioFormatDescriptionRef)[formatDesc objectAtIndex:i];
        const AudioStreamBasicDescription* fmtDesc = CMAudioFormatDescriptionGetStreamBasicDescription (item);
        if(fmtDesc ) {

            sampleRate = fmtDesc->mSampleRate;
            channelCount = fmtDesc->mChannelsPerFrame;

            //    NSLog(@"channels:%u, bytes/packet: %u, sampleRate %f",fmtDesc->mChannelsPerFrame, fmtDesc->mBytesPerPacket,fmtDesc->mSampleRate);
        }
    }

    UInt32 bytesPerSample = 2 * channelCount;
    Float32 normalizeMax = noiseFloor;
    NSLog(@"normalizeMax = %f",normalizeMax);
    NSMutableData * fullSongData = [[NSMutableData alloc] init];
    [reader startReading];

    UInt64 totalBytes = 0;
    Float64 totalLeft = 0;
    Float64 totalRight = 0;
    Float32 sampleTally = 0;

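    // as in the linear version: one averaged output point per 1/50th of a second of audio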
    NSInteger samplesPerPixel = sampleRate / 50;

    while (reader.status == AVAssetReaderStatusReading){

        AVAssetReaderTrackOutput * trackOutput = (AVAssetReaderTrackOutput *)[reader.outputs objectAtIndex:0];
        CMSampleBufferRef sampleBufferRef = [trackOutput copyNextSampleBuffer];

        if (sampleBufferRef){
            CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(sampleBufferRef);

            size_t length = CMBlockBufferGetDataLength(blockBufferRef);
            totalBytes += length;

            NSAutoreleasePool *wader = [[NSAutoreleasePool alloc] init];

            NSMutableData * data = [NSMutableData dataWithLength:length];
            CMBlockBufferCopyDataBytes(blockBufferRef, 0, length, data.mutableBytes);

            SInt16 * samples = (SInt16 *) data.mutableBytes;
            int sampleCount = length / bytesPerSample;
            for (int i = 0; i < sampleCount ; i ++) {

                Float32 left = (Float32) *samples++;
                left = decibel(left);
                left = minMaxX(left,noiseFloor,0);
                totalLeft  += left;

                Float32 right;
                if (channelCount==2) {
                    right = (Float32) *samples++;
                    right = decibel(right);
                    right = minMaxX(right,noiseFloor,0);
                    totalRight += right;
                }

                sampleTally++;

                if (sampleTally > samplesPerPixel) {

                    left  = totalLeft / sampleTally;
                    if (left > normalizeMax) {
                        normalizeMax = left;
                    }

                   // NSLog(@"left average = %f, normalizeMax = %f",left,normalizeMax);

                    [fullSongData appendBytes:&left length:sizeof(left)];

                    if (channelCount==2) {
                        right = totalRight / sampleTally;

                        if (right > normalizeMax) {
                            normalizeMax = right;
                        }

                        [fullSongData appendBytes:&right length:sizeof(right)];
                    }

                    totalLeft   = 0;
                    totalRight  = 0;
                    sampleTally = 0;
                }
            }

           [wader drain];

            CMSampleBufferInvalidate(sampleBufferRef);
            CFRelease(sampleBufferRef);
        }
    }

    NSData * finalData = nil;

    if (reader.status == AVAssetReaderStatusFailed || reader.status == AVAssetReaderStatusUnknown){
        // Something went wrong. Handle it.
    }

    if (reader.status == AVAssetReaderStatusCompleted){
        // You're done. It worked.

        NSLog(@"rendering output graphics using normalizeMax %f",normalizeMax);

        UIImage *test = [self audioImageLogGraph:(Float32 *)fullSongData.bytes
                                    normalizeMax:normalizeMax
                                     sampleCount:fullSongData.length / (sizeof(Float32) * 2)
                                    channelCount:2
                                     imageHeight:100];

        finalData = imageToData(test);
    }

    [fullSongData release];
    [reader release];

    return finalData;
}

Comparison outputs

Linear

[image: linear plot of the beginning of “Warm It Up” by Acme Swing Company]

Logarithmic

[image: logarithmic plot of the beginning of “Warm It Up” by Acme Swing Company]


Answer

You should be able to get a buffer of audio from your sampleBuffRef and then iterate through those values to build your waveform:

CMBlockBufferRef buffer = CMSampleBufferGetDataBuffer( sampleBufferRef );
CMItemCount numSamplesInBuffer = CMSampleBufferGetNumSamples(sampleBufferRef);
AudioBufferList audioBufferList;

CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBufferRef,
                                                        NULL,
                                                        &audioBufferList,
                                                        sizeof(audioBufferList),
                                                        NULL,
                                                        NULL,
                                                        kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                                                        &buffer);

// this copies your audio out to a temp buffer but you should be able to iterate through this buffer instead
SInt32 *readBuffer = (SInt32 *)malloc(numSamplesInBuffer * sizeof(SInt32));
memcpy(readBuffer, audioBufferList.mBuffers[0].mData, numSamplesInBuffer * sizeof(SInt32));
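
From there, a minimal peak-detection sketch (assuming the 16-bit interleaved PCM output settings shown in the first answer; this is an illustrative addition, not the answerer's code):

    // walk the samples as SInt16 and track the largest magnitude seen
    SInt16 *samples16 = (SInt16 *)audioBufferList.mBuffers[0].mData;
    size_t sampleCount = audioBufferList.mBuffers[0].mDataByteSize / sizeof(SInt16);

    int peak = 0;
    for (size_t i = 0; i < sampleCount; i++) {
        int amplitude = abs(samples16[i]);
        if (amplitude > peak) peak = amplitude;
    }

    CFRelease(buffer); // the retained block buffer from the call above must be released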


Answer

Another approach, using Swift 5 and AVAudioFile:

/// Gets the audio file from a URL, downsamples it, and draws it into the sound layer.
func drawSoundWave(fromURL url:URL, fromPosition:Int64, totalSeconds:UInt32, samplesSecond:CGFloat) throws{

    print("\(logClassName) Drawing sound from \(url)")

    do{
        waveViewInfo.samplesSeconds = samplesSecond

        //Get audio file and format from URL
        let audioFile = try AVAudioFile(forReading: url)

        waveViewInfo.format = audioFile.processingFormat
        audioFile.framePosition = fromPosition * Int64(waveViewInfo.format.sampleRate)

        //Getting the buffer
        let frameCapacity:UInt32 = totalSeconds * UInt32(waveViewInfo.format.sampleRate)

        guard let audioPCMBuffer = AVAudioPCMBuffer(pcmFormat: waveViewInfo.format, frameCapacity: frameCapacity) else{ throw AppError("Unable to get the AVAudioPCMBuffer") }
        try audioFile.read(into: audioPCMBuffer, frameCount: frameCapacity)
        let audioPCMBufferFloatValues:[Float] = Array(UnsafeBufferPointer(start: audioPCMBuffer.floatChannelData?.pointee,
                                                                          count: Int(audioPCMBuffer.frameLength)))

        waveViewInfo.points = []
        waveViewInfo.maxValue = 0
        for index in stride(from: 0, to: audioPCMBufferFloatValues.count, by: Int(audioFile.fileFormat.sampleRate) / Int(waveViewInfo.samplesSeconds)){

            let aSample = CGFloat(audioPCMBufferFloatValues[index])
            waveViewInfo.points.append(aSample)
            let fix = abs(aSample)
            if fix > waveViewInfo.maxValue{
                waveViewInfo.maxValue = fix
            }

        }

        print("\(logClassName) Finished the points - Count = \(waveViewInfo.points.count) / Max = \(waveViewInfo.maxValue)")

        populateSoundImageView(with: waveViewInfo)

    }
    catch{

        throw error

    }

}

/// Converts the sound wave into a UIImage
func populateSoundImageView(with waveViewInfo:WaveViewInfo){

    let imageSize:CGSize = CGSize(width: CGFloat(waveViewInfo.points.count),//CGFloat(waveViewInfo.points.count) * waveViewInfo.sampleSpace,
                                  height: frame.height)
    let drawingRect = CGRect(origin: .zero, size: imageSize)

    UIGraphicsBeginImageContextWithOptions(imageSize, false, 0)
    defer {
        UIGraphicsEndImageContext()
    }
    print("\(logClassName) Converting sound view in rect \(drawingRect)")

    guard let context:CGContext = UIGraphicsGetCurrentContext() else{ return }

    context.setFillColor(waveViewInfo.backgroundColor.cgColor)
    context.setAlpha(1.0)
    context.fill(drawingRect)
    context.setLineWidth(1.0)
    //        context.setLineWidth(waveViewInfo.lineWidth)

    let sampleAdjustFactor = imageSize.height / waveViewInfo.maxValue
    for pointIndex in waveViewInfo.points.indices{

        let pixel = waveViewInfo.points[pointIndex] * sampleAdjustFactor

        context.move(to: CGPoint(x: CGFloat(pointIndex), y: middleY - pixel))
        context.addLine(to: CGPoint(x: CGFloat(pointIndex), y: middleY + pixel))

        context.setStrokeColor(waveViewInfo.strokeColor.cgColor)
        context.strokePath()

    }

     //        for pointIndex in waveViewInfo.points.indices{
    //
    //            let pixel = waveViewInfo.points[pointIndex] * sampleAdjustFactor
    //
    //            context.move(to: CGPoint(x: CGFloat(pointIndex) * waveViewInfo.sampleSpace, y: middleY - pixel))
    //            context.addLine(to: CGPoint(x: CGFloat(pointIndex) * waveViewInfo.sampleSpace, y: middleY + pixel))
    //
    //            context.setStrokeColor(waveViewInfo.strokeColor.cgColor)
    //            context.strokePath()
    //
    //        }

    //        var xIncrement:CGFloat = 0
    //        for point in waveViewInfo.points{
    //
    //            let normalizedPoint = point * sampleAdjustFactor
    //
    //            context.move(to: CGPoint(x: xIncrement, y: middleY - normalizedPoint))
    //            context.addLine(to: CGPoint(x: xIncrement, y: middleX + normalizedPoint))
    //            context.setStrokeColor(waveViewInfo.strokeColor.cgColor)
    //            context.strokePath()
    //
    //            xIncrement += waveViewInfo.sampleSpace
    //
    //        }

    guard let soundWaveImage = UIGraphicsGetImageFromCurrentImageContext() else{ return }

    soundWaveImageView.image = soundWaveImage
    //        //In case of handling sample space in for
    //        updateWidthConstraintValue(soundWaveImage.size.width)
    updateWidthConstraintValue(soundWaveImage.size.width * waveViewInfo.sampleSpace)

}

Where

class WaveViewInfo {

    var format:AVAudioFormat!
    var samplesSeconds:CGFloat = 50
    var lineWidth:CGFloat = 0.20
    var sampleSpace:CGFloat = 0.20

    var strokeColor:UIColor = .red
    var backgroundColor:UIColor = .clear

    var maxValue:CGFloat = 0
    var points:[CGFloat] = [CGFloat]()

}

At the moment it only prints a single sound wave, but it can be extended. The nice part is that you can print an audio track part by part.

