Setting up an AudioUnit effect

I am trying to write an iOS app that captures sound from the microphone, passes it through a high-pass filter, and does some calculations on the processed sound. Based on Stefan Popp's MicInput (http://www.stefanpopp.de/2011/capture-iphone-microphone/), I am trying to put an effect audio unit (specifically, a high-pass filter effect unit) between the input and the output of the I/O audio unit. After setting that AU up, I get error -10877 (kAudioUnitErr_InvalidElement) when calling AudioUnitRender(fxAudioUnit, ...) in the I/O AU's render callback.
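To make the intended routing easier to follow, this is the pull chain I am trying to build (a conceptual sketch only; the names match the code below):

    // mic samples arrive on the I/O unit's input bus (1)
    //   -> recordingCallback                       (input callback on audioUnit)
    //        -> AudioUnitRender(fxAudioUnit, ...)  (ask the filter for processed samples)
    //             -> fxAudioUnitRenderCallback     (render callback on fxAudioUnit)
    //                  -> AudioUnitRender(audioUnit, ...)  (pull the raw mic samples)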

AudioProcessingWithAudioUnitAPI.h

    //
    //  AudioProcessingWithAudioUnitAPI.h
    //

    #import <Foundation/Foundation.h>
    #import <AudioToolbox/AudioToolbox.h>
    #import <AVFoundation/AVAudioSession.h>

    @interface AudioProcessingWithAudioUnitAPI : NSObject

    @property (readonly) AudioBuffer audioBuffer;
    @property (readonly) AudioComponentInstance audioUnit;
    @property (readonly) AudioComponentInstance fxAudioUnit;

    ...

    @end

AudioProcessingWithAudioUnitAPI.m

    //
    //  AudioProcessingWithAudioUnitAPI.m
    //

    #import "AudioProcessingWithAudioUnitAPI.h"

    @implementation AudioProcessingWithAudioUnitAPI

    @synthesize isPlaying = _isPlaying;
    @synthesize outputLevelDisplay = _outputLevelDisplay;
    @synthesize audioBuffer = _audioBuffer;
    @synthesize audioUnit = _audioUnit;
    @synthesize fxAudioUnit = _fxAudioUnit;

    ...

    #pragma mark Recording callback

    static OSStatus recordingCallback(void *inRefCon,
                                      AudioUnitRenderActionFlags *ioActionFlags,
                                      const AudioTimeStamp *inTimeStamp,
                                      UInt32 inBusNumber,
                                      UInt32 inNumberFrames,
                                      AudioBufferList *ioData)
    {
        // the data gets rendered here
        AudioBuffer buffer;

        // a variable where we check the status
        OSStatus status;

        // This is the reference to the object that owns the callback.
        AudioProcessingWithAudioUnitAPI *audioProcessor =
            (__bridge AudioProcessingWithAudioUnitAPI*) inRefCon;

        // At this point we define the number of channels, which is mono
        // for the iPhone. The number of frames is usually 512 or 1024.
        buffer.mDataByteSize = inNumberFrames * 2;   // sample size
        buffer.mNumberChannels = 1;                  // one channel
        buffer.mData = malloc( inNumberFrames * 2 ); // buffer size

        // we put our buffer into a bufferlist array for rendering
        AudioBufferList bufferList;
        bufferList.mNumberBuffers = 1;
        bufferList.mBuffers[0] = buffer;

And the following AudioUnitRender call is where the -10877 error is raised:

        status = AudioUnitRender([audioProcessor fxAudioUnit],
                                 ioActionFlags,
                                 inTimeStamp,
                                 inBusNumber,
                                 inNumberFrames,
                                 &bufferList);
        [audioProcessor hasError:status:__FILE__:__LINE__];
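(A side note on the call above: as far as I can tell, an effect unit exposes a single element, 0, while inBusNumber here is the I/O unit's input bus, 1, so passing it straight through may be exactly what produces kAudioUnitErr_InvalidElement. A hypothetical variant pinning the element to 0 would look like this.)

        status = AudioUnitRender([audioProcessor fxAudioUnit],
                                 ioActionFlags,
                                 inTimeStamp,
                                 0, // the fx unit's only element, instead of inBusNumber (1)
                                 inNumberFrames,
                                 &bufferList);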

        // process the bufferlist in the audio processor
        [audioProcessor processBuffer:&bufferList];

        // do some further processing

        // clean up the buffer
        free(bufferList.mBuffers[0].mData);

        return noErr;
    }

    #pragma mark FX AudioUnit render callback

    // This just asks the microphone (I/O AU render) for samples
    static OSStatus fxAudioUnitRenderCallback(void *inRefCon,
                                              AudioUnitRenderActionFlags *ioActionFlags,
                                              const AudioTimeStamp *inTimeStamp,
                                              UInt32 inBusNumber,
                                              UInt32 inNumberFrames,
                                              AudioBufferList *ioData)
    {
        OSStatus retorno;

        AudioProcessingWithAudioUnitAPI* audioProcessor =
            (__bridge AudioProcessingWithAudioUnitAPI*)inRefCon;

        retorno = AudioUnitRender([audioProcessor audioUnit],
                                  ioActionFlags,
                                  inTimeStamp,
                                  inBusNumber,
                                  inNumberFrames,
                                  ioData);
        [audioProcessor hasError:retorno:__FILE__:__LINE__];

        return retorno;
    }

    #pragma mark Playback callback

    static OSStatus playbackCallback(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData)
    {
        // This is the reference to the object that owns the callback.
        AudioProcessingWithAudioUnitAPI *audioProcessor =
            (__bridge AudioProcessingWithAudioUnitAPI*) inRefCon;

        // iterate over incoming stream and copy to output stream
        for (int i = 0; i < ioData->mNumberBuffers; i++) {
            AudioBuffer buffer = ioData->mBuffers[i];

            // find minimum size
            UInt32 size = MIN(buffer.mDataByteSize,
                              [audioProcessor audioBuffer].mDataByteSize);

            // copy buffer to audio buffer which gets played after function return
            memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);

            // set data size
            buffer.mDataByteSize = size;
        }

        return noErr;
    }

    #pragma mark - objective-c class methods

    -(AudioProcessingWithAudioUnitAPI*)init
    {
        self = [super init];
        if (self) {
            self.isPlaying = NO;
            [self initializeAudio];
        }
        return self;
    }

    -(void)initializeAudio
    {
        OSStatus status;

        // We define the audio component
        AudioComponentDescription desc;
        desc.componentType = kAudioUnitType_Output;         // we want to output
        desc.componentSubType = kAudioUnitSubType_RemoteIO; // we want in and output
        desc.componentFlags = 0;                            // must be zero
        desc.componentFlagsMask = 0;                        // must be zero
        desc.componentManufacturer = kAudioUnitManufacturer_Apple; // select provider

        // find the AU component by description
        AudioComponent component = AudioComponentFindNext(NULL, &desc);

        // create audio unit by component
        status = AudioComponentInstanceNew(component, &_audioUnit);
        [self hasError:status:__FILE__:__LINE__];

        // and now for the fx AudioUnit
        desc.componentType = kAudioUnitType_Effect;
        desc.componentSubType = kAudioUnitSubType_HighPassFilter;

        // find the AU component by description
        component = AudioComponentFindNext(NULL, &desc);

        // create audio unit by component
        status = AudioComponentInstanceNew(component, &_fxAudioUnit);
        [self hasError:status:__FILE__:__LINE__];

        // define that we want record io on the input bus
        AudioUnitElement inputElement = 1;
        AudioUnitElement outputElement = 0;
        UInt32 flag = 1;
        status = AudioUnitSetProperty(self.audioUnit,
                                      kAudioOutputUnitProperty_EnableIO, // use io
                                      kAudioUnitScope_Input,             // scope to input
                                      inputElement,                      // select input bus (1)
                                      &flag,                             // set flag
                                      sizeof(flag));
        [self hasError:status:__FILE__:__LINE__];

        UInt32 anotherFlag = 0;
        // disable output (I don't want to hear back from the device)
        status = AudioUnitSetProperty(self.audioUnit,
                                      kAudioOutputUnitProperty_EnableIO, // use io
                                      kAudioUnitScope_Output,            // scope to output
                                      outputElement,                     // select output bus (0)
                                      &anotherFlag,                      // set flag
                                      sizeof(flag));
        [self hasError:status:__FILE__:__LINE__];

        /*
         We need to specify the format we want to work with. We use Linear PCM
         because it is uncompressed and we work on raw data.
         We want 16 bits, 2 bytes per packet/frame, at 44 kHz.
         */
        AudioStreamBasicDescription audioFormat;
        audioFormat.mSampleRate = SAMPLE_RATE;
        audioFormat.mFormatID = kAudioFormatLinearPCM;
        audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
        audioFormat.mFramesPerPacket = 1;
        audioFormat.mChannelsPerFrame = 1;
        audioFormat.mBitsPerChannel = 16; //65536
        audioFormat.mBytesPerPacket = 2;
        audioFormat.mBytesPerFrame = 2;

        // set the format on the output scope of the input bus
        status = AudioUnitSetProperty(self.audioUnit,
                                      kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Output,
                                      inputElement,
                                      &audioFormat,
                                      sizeof(audioFormat));
        [self hasError:status:__FILE__:__LINE__];

        // set the format on the input scope of the output bus
        status = AudioUnitSetProperty(self.audioUnit,
                                      kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Input,
                                      outputElement,
                                      &audioFormat,
                                      sizeof(audioFormat));
        [self hasError:status:__FILE__:__LINE__];

        /*
         We need to define a callback structure that holds a pointer to the
         recordingCallback and a reference to the audio processor object.
         */
        AURenderCallbackStruct callbackStruct;

        // set recording callback
        callbackStruct.inputProc = recordingCallback; // recordingCallback pointer
        callbackStruct.inputProcRefCon = (__bridge void*)self;

        // set input callback to recording callback on the input bus
        status = AudioUnitSetProperty(self.audioUnit,
                                      kAudioOutputUnitProperty_SetInputCallback,
                                      kAudioUnitScope_Global,
                                      inputElement,
                                      &callbackStruct,
                                      sizeof(callbackStruct));
        [self hasError:status:__FILE__:__LINE__];

        /*
         We do the same on the output stream to hear what is coming
         from the input stream.
         */
        callbackStruct.inputProc = playbackCallback;
        callbackStruct.inputProcRefCon = (__bridge void*)self;

        // set playbackCallback as callback on our renderer for the output bus
        status = AudioUnitSetProperty(self.audioUnit,
                                      kAudioUnitProperty_SetRenderCallback,
                                      kAudioUnitScope_Global,
                                      outputElement,
                                      &callbackStruct,
                                      sizeof(callbackStruct));
        [self hasError:status:__FILE__:__LINE__];

        // set the input callback of the fx AU
        callbackStruct.inputProc = fxAudioUnitRenderCallback;
        callbackStruct.inputProcRefCon = (__bridge void*)self;

        status = AudioUnitSetProperty(self.fxAudioUnit,
                                      kAudioUnitProperty_SetRenderCallback,
                                      kAudioUnitScope_Global,
                                      0,
                                      &callbackStruct,
                                      sizeof(callbackStruct));
        [self hasError:status:__FILE__:__LINE__];

        // reset flag to 0
        flag = 0;

        /*
         We need to tell the audio units to allocate the render buffer
         so that we can write directly into it.
         */
        status = AudioUnitSetProperty(self.audioUnit,
                                      kAudioUnitProperty_ShouldAllocateBuffer,
                                      kAudioUnitScope_Output,
                                      inputElement,
                                      &flag,
                                      sizeof(flag));
        status = AudioUnitSetProperty(self.fxAudioUnit,
                                      kAudioUnitProperty_ShouldAllocateBuffer,
                                      kAudioUnitScope_Output,
                                      0,
                                      &flag,
                                      sizeof(flag));

        /*
         We set the number of channels to mono and allocate our block size
         to 1024 bytes.
         */
        _audioBuffer.mNumberChannels = 1;
        _audioBuffer.mDataByteSize = 512 * 2;
        _audioBuffer.mData = malloc( 512 * 2 );

        // Initialize the Audio Unit and cross fingers =)
        status = AudioUnitInitialize(self.fxAudioUnit);
        [self hasError:status:__FILE__:__LINE__];
        status = AudioUnitInitialize(self.audioUnit);
        [self hasError:status:__FILE__:__LINE__];

        NSLog(@"Started");
    }

    // For now, this just copies the buffer to self.audioBuffer
    -(void)processBuffer: (AudioBufferList*) audioBufferList
    {
        AudioBuffer sourceBuffer = audioBufferList->mBuffers[0];

        // we check here if the input data byte size has changed
        if (_audioBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
            // clear old buffer
            free(self.audioBuffer.mData);
            // assign the new byte size and allocate mData accordingly
            _audioBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
            _audioBuffer.mData = malloc(sourceBuffer.mDataByteSize);
        }

        // copy incoming audio data to the audio buffer
        memcpy(self.audioBuffer.mData,
               audioBufferList->mBuffers[0].mData,
               audioBufferList->mBuffers[0].mDataByteSize);
    }

    #pragma mark - Error handling

    -(void)hasError:(int)statusCode:(char*)file:(int)line
    {
        if (statusCode) {
            printf("Error Code responded %d in file %s on line %d\n",
                   statusCode, file, line);
            exit(-1);
        }
    }

    @end

Any help will be appreciated.

This kind of question comes up fairly often, so at one point I wrote a mini-tutorial on the subject. That guide really is the most straightforward way to attack the problem, but these days I feel a much more elegant option is to use the Novocaine framework, which takes most of the headache out of AudioUnit configuration on iOS.
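For comparison, a minimal Novocaine capture setup looks roughly like this (a sketch from memory, so treat the exact block signature as an assumption and check the project's README):

    #import "Novocaine.h"

    Novocaine *audioManager = [Novocaine audioManager];

    // Microphone samples arrive here as floats; do the filtering/analysis
    // inline instead of wiring AudioUnits together by hand.
    [audioManager setInputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
        // e.g. run a high-pass filter over data and take measurements
    }];

    [audioManager play];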

I came across some demo code that might be useful for you:

DEMO URL: https://github.com/JNYJdev/AudioUnit

Or

blog: http://atastypixel.com/blog/using-remoteio-audio-unit/

    static OSStatus recordingCallback(void *inRefCon,
                                      AudioUnitRenderActionFlags *ioActionFlags,
                                      const AudioTimeStamp *inTimeStamp,
                                      UInt32 inBusNumber,
                                      UInt32 inNumberFrames,
                                      AudioBufferList *ioData)
    {
        // Because of the way our audio format (setup below) is chosen:
        // we only need 1 buffer, since it is mono
        // Samples are 16 bits = 2 bytes.
        // 1 frame includes only 1 sample
        AudioBuffer buffer;

        buffer.mNumberChannels = 1;
        buffer.mDataByteSize = inNumberFrames * 2;
        buffer.mData = malloc( inNumberFrames * 2 );

        // Put buffer in a AudioBufferList
        AudioBufferList bufferList;
        bufferList.mNumberBuffers = 1;
        bufferList.mBuffers[0] = buffer;

        // Then:
        // Obtain recorded samples
        OSStatus status;
        status = AudioUnitRender([iosAudio audioUnit],
                                 ioActionFlags,
                                 inTimeStamp,
                                 inBusNumber,
                                 inNumberFrames,
                                 &bufferList);
        checkStatus(status);

        // Now, we have the samples we just read sitting in buffers in bufferList
        // Process the new data
        [iosAudio processAudio:&bufferList];

        // release the malloc'ed data in the buffer we created earlier
        free(bufferList.mBuffers[0].mData);

        return noErr;
    }
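checkStatus is not shown in that snippet; any small OSStatus checker will do, for example this minimal sketch:

    static void checkStatus(OSStatus status) {
        if (status != noErr) {
            // log the failing status; bail out or recover as appropriate
            NSLog(@"Audio unit call failed, status %d", (int)status);
        }
    }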