A Detailed Look at the Android Audio System via the AudioRecord Flow


We start from the construction of the Android API class AudioRecord:

The constructor calls the native function native_setup. In the JNI file android_media_AudioRecord.cpp, each of the basic API methods maps to a corresponding native function:

    private native final int native_setup(Object audiorecord_this,
            Object  attributes,
            int[] sampleRate, int channelMask, int channelIndexMask, int audioFormat,
            int buffSizeInBytes, int[] sessionId, String opPackageName,
            long nativeRecordInJavaObj);

    private native final int native_start(int syncEvent, int sessionId);

    private native final int native_read_in_byte_array(byte[] audioData,
            int offsetInBytes, int sizeInBytes, boolean isBlocking);

android_media_AudioRecord_setup

lpRecorder = new AudioRecord(String16(opPackageNameStr.c_str()));

lpRecorder->set(...) then calls openRecord_l(), which goes to AudioFlinger's openRecord() to obtain an IAudioRecord.

sp<IAudioRecord> AudioFlinger::openRecord

recordingAllowed(opPackageName, tid, clientUid): checks the recording permission against the package name, tid and uid;

RecordThread *thread = checkRecordThread_l(input): looks up the RecordThread in mRecordThreads by the audio_io_handle;

client = registerPid(pid): creates an AudioFlinger::Client object (whose main resource is a 1024*1024-byte memory heap) and adds it via mClients.add(pid, client); a minimal sketch of this per-pid registry pattern follows this list;

AudioFlinger::RecordThread::createRecordTrack_l: the RecordThread creates the track;

recordHandle = new RecordHandle(recordTrack): wraps the RecordTrack in a RecordHandle;

return recordHandle: RecordHandle inherits from BnAudioRecord, so the returned sp<IAudioRecord> is a Binder object handed back to the client-side AudioRecord.
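
The registerPid step keeps at most one Client per calling process. Below is a minimal, self-contained sketch of that kind of per-pid registry, using std::weak_ptr in place of Android's wp<Client> and a placeholder Client struct; the names Registry and Client here only mirror the text and are not AudioFlinger's actual code:

    #include <cstdio>
    #include <map>
    #include <memory>
    #include <mutex>

    struct Client {                       // stand-in for AudioFlinger::Client
        explicit Client(int pid) : pid(pid) {}
        int pid;                          // the real Client also owns a ~1 MiB memory heap
    };

    class Registry {
    public:
        // return the existing Client for this pid if still alive, otherwise create one
        std::shared_ptr<Client> registerPid(int pid) {
            std::lock_guard<std::mutex> lock(mLock);
            if (auto existing = mClients[pid].lock()) {
                return existing;          // reuse the live Client for this process
            }
            auto client = std::make_shared<Client>(pid);
            mClients[pid] = client;       // store it, like mClients.add(pid, client)
            return client;
        }
    private:
        std::mutex mLock;
        std::map<int, std::weak_ptr<Client>> mClients;
    };

    int main() {
        Registry af;
        auto a = af.registerPid(1234);
        auto b = af.registerPid(1234);    // same process: same Client instance
        std::printf("same client: %s\n", a == b ? "yes" : "no");
        return 0;
    }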

IAudioFlinger is the Binder interface of AudioFlinger. The client side, i.e. the layer where AudioRecord.cpp runs, obtains AudioFlinger's Binder object through AudioSystem. Internally this looks up the Binder service via ServiceManager and caches the result in a static member of AudioSystem.

const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
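
get_audio_flinger performs the ServiceManager lookup once and reuses the cached handle afterwards (the real implementation also registers a binder death notification so the cache can be dropped if audioserver dies). A minimal sketch of that lazily initialized, lock-protected cache; the Service type and lookupService call below are placeholders, not the real ServiceManager API:

    #include <cstdio>
    #include <memory>
    #include <mutex>

    struct Service {                        // stand-in for IAudioFlinger
        void openRecord() { std::printf("openRecord called\n"); }
    };

    // placeholder for the ServiceManager lookup (the real lookup may block and retry)
    static std::shared_ptr<Service> lookupService(const char* name) {
        std::printf("looking up %s\n", name);
        return std::make_shared<Service>();
    }

    // analogous to AudioSystem::get_audio_flinger(): look up once, then reuse
    static std::shared_ptr<Service> get_audio_flinger() {
        static std::mutex gLock;
        static std::shared_ptr<Service> gAudioFlinger;   // cached service handle
        std::lock_guard<std::mutex> lock(gLock);
        if (!gAudioFlinger) {
            gAudioFlinger = lookupService("media.audio_flinger");
        }
        return gAudioFlinger;
    }

    int main() {
        get_audio_flinger()->openRecord();   // first call performs the lookup
        get_audio_flinger()->openRecord();   // second call hits the cache
        return 0;
    }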

Calling openRecord on AudioFlinger's Binder proxy is a standard Binder call. The code below is heavily trimmed to keep only the Binder plumbing: data and reply are the Parcels that serialize the outgoing arguments and the returned values, remote()->transact performs the Binder transaction, and an IAudioRecord Binder object is read back from the reply.

Note that the transaction code is OPEN_RECORD; the server side uses this code to tell the calls apart.

    virtual sp<IAudioRecord> openRecord(
                                audio_io_handle_t input,
                                uint32_t sampleRate,
                                audio_format_t format, ...)
                                // remaining parameters trimmed; among them, cblk and buffers
                                // are sp<IMemory>& output parameters filled from the reply
    {
        Parcel data, reply;
        sp<IAudioRecord> record;

        // serialize the arguments (most writes trimmed)
        data.writeInt32((int32_t) input);
        data.writeInt32(format);

        // the actual Binder transaction
        status_t lStatus = remote()->transact(OPEN_RECORD, data, &reply);
        if (lStatus != NO_ERROR) {
            ALOGE("openRecord error: %s", strerror(-lStatus));
        } else {
            // deserialize the results (some reads trimmed)
            size_t lNotificationFrames = (size_t) reply.readInt64();
            lStatus = reply.readInt32();
            record = interface_cast<IAudioRecord>(reply.readStrongBinder());
            cblk = interface_cast<IMemory>(reply.readStrongBinder());
            buffers = interface_cast<IMemory>(reply.readStrongBinder());
        }
        return record;
    }

In Android's code layout, the Binder server-side receiving code and the client-side proxy usually live in the same file, here IAudioFlinger.cpp. BnAudioFlinger is the server-side interface, and its onTransact receives the Binder call.

The code below again keeps only the Binder plumbing. Because BnAudioFlinger is a base class of AudioFlinger, the openRecord call here resolves to AudioFlinger's member function, so the transaction lands in the AudioFlinger service. The results, including Binder objects, are serialized into the reply.

status_t BnAudioFlinger::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch (code) {
        case OPEN_RECORD: {
            CHECK_INTERFACE(IAudioFlinger, data, reply);
            audio_io_handle_t input = (audio_io_handle_t) data.readInt32();
            // ... reads of the remaining arguments trimmed ...

            // BnAudioFlinger is a base class of AudioFlinger, so this call executes
            // AudioFlinger::openRecord inside the audio service
            sp<IAudioRecord> record = openRecord(input,
                    sampleRate, format, channelMask, opPackageName, &frameCount, &flags,
                    pid, tid, clientUid, &sessionId, &notificationFrames, cblk, buffers,
                    &status);

            // serialize the results, including the Binder objects, into the reply
            reply->writeInt32(status);
            reply->writeStrongBinder(IInterface::asBinder(record));
            reply->writeStrongBinder(IInterface::asBinder(cblk));
            reply->writeStrongBinder(IInterface::asBinder(buffers));
            return NO_ERROR;
        } break;
        // ... other cases trimmed ...
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}
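
To see the proxy/stub split without any Android dependencies, here is a self-contained toy version of the same pattern: the "proxy" serializes arguments plus a transaction code, and the "stub" dispatches on that code in its onTransact, which is exactly the division of labor between BpAudioFlinger and BnAudioFlinger above. This is only an illustration of the pattern, not Android's Parcel or Binder API:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    enum { OPEN_RECORD = 1 };              // transaction code, like IAudioFlinger's enum

    // toy stand-in for Parcel: a flat buffer of int32 values
    struct Parcel {
        std::vector<int32_t> data;
        size_t pos = 0;
        void writeInt32(int32_t v) { data.push_back(v); }
        int32_t readInt32() { return data[pos++]; }
    };

    // "Bn" side: receives the transaction and dispatches on the code
    struct BnService {
        int onTransact(uint32_t code, Parcel& data, Parcel* reply) {
            switch (code) {
            case OPEN_RECORD: {
                int32_t input = data.readInt32();
                int32_t format = data.readInt32();
                std::printf("server: openRecord(input=%d, format=%d)\n", input, format);
                reply->writeInt32(0);      // status
                return 0;
            }
            }
            return -1;
        }
    };

    // "Bp" side: serializes the arguments and issues the transaction
    struct BpService {
        BnService* remote;                 // in real Binder this crosses a process boundary
        int32_t openRecord(int32_t input, int32_t format) {
            Parcel data, reply;
            data.writeInt32(input);
            data.writeInt32(format);
            remote->onTransact(OPEN_RECORD, data, &reply);  // stands in for remote()->transact()
            return reply.readInt32();      // status read back from the reply
        }
    };

    int main() {
        BnService server;
        BpService proxy{&server};
        int32_t status = proxy.openRecord(/*input=*/7, /*format=*/1);
        std::printf("client: status=%d\n", status);
        return 0;
    }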

 

 

AudioFlinger::RecordThread::createRecordTrack_l: the RecordThread creates the track:

    track = new RecordTrack(this, client, sampleRate,
                      format, channelMask, frameCount, NULL, sessionId, uid,
                      *flags, TrackBase::TYPE_DEFAULT);

The RecordTrack internally contains a RecordBufferConverter, an AudioRecordServerProxy and a ResamplerBufferProvider.

mTracks.add(track); mTracks is a Vector.

It returns an sp<RecordTrack> object.

On the client side, AudioRecord now holds the IAudioRecord Binder object it can use to reach AudioFlinger and stores it in mAudioRecord. A piece of shared memory is also involved here, which is very important and is covered below.

At this point the recorder's initialization is complete.

The API startRecording corresponds to the following flow:

native_start: calls the native start;

android_media_AudioRecord_start: the JNI start function;

sp<AudioRecord> lpRecorder = getAudioRecord(env, thiz); retrieves the native AudioRecord object that was created earlier and stored in the Java layer;

lpRecorder->start((AudioSystem::sync_event_t) event, (audio_session_t) triggerSession) calls AudioRecord::start; triggerSession comes from the Java API and defaults to 0.

status_t AudioRecord::start

status = mAudioRecord->start(event, triggerSession); this calls mAudioRecord, the IAudioRecord Binder object obtained from AudioFlinger earlier. It exposes only two callable methods, and the transact code identifies the type of Binder call:

enum {
    UNUSED_WAS_GET_CBLK = IBinder::FIRST_CALL_TRANSACTION,
    START,
    STOP
};

status_t BnAudioRecord::onTransact executes start on the server side. RecordHandle inherits from BnAudioRecord, so calling start in BnAudioRecord ends up in the member function AudioFlinger::RecordHandle::start.

AudioFlinger::RecordHandle is implemented in audioflinger/Tracks.cpp.

mRecordTrack->start((AudioSystem::sync_event_t) event, triggerSession); this actually executes the track's method; RecordHandle is essentially the Binder wrapper around RecordTrack.

status_t AudioFlinger::RecordThread::RecordTrack::start: RecordTrack is tightly coupled to its RecordThread.

The track is declared in RecordTrack.h and implemented in Tracks.cpp, both under the audioflinger directory.

recordThread->start(this, event, triggerSession); calls start on the RecordThread, passing the current track (this) into it; event is SYNC_EVENT_NONE (passed from Java and mapped through AudioSystem).

AudioFlinger::RecordThread::start

        mActiveTracks.add(recordTrack);                 // add the track to the active list
        mActiveTracksGen++;                             // bump the active-tracks generation counter
        recordTrack->mResamplerBufferProvider->reset(); // reset the provider's read position
        // clear any converter state as new data will be discontinuous
        recordTrack->mRecordBufferConverter->reset();
        recordTrack->mState = TrackBase::STARTING_2;    // mark the track as starting
        // signal thread to start
        mWaitWorkCV.broadcast();                        // wake up the record thread loop
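
The interaction between RecordThread::start and the thread loop is a classic condition-variable handoff: start adds the track under the lock and broadcasts, while the loop sleeps when nothing is active. A self-contained sketch of that handoff with the standard library (simplified; Android uses its own Mutex/Condition classes and the names below only echo the text):

    #include <chrono>
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <thread>
    #include <vector>

    std::mutex gLock;
    std::condition_variable gWaitWorkCV;     // plays the role of mWaitWorkCV
    std::vector<int> gActiveTracks;          // plays the role of mActiveTracks
    bool gExit = false;

    void threadLoop() {                      // simplified stand-in for RecordThread::threadLoop
        for (;;) {
            std::unique_lock<std::mutex> lk(gLock);
            // sleep while there is nothing to record, like mWaitWorkCV.wait(mLock)
            gWaitWorkCV.wait(lk, [] { return gExit || !gActiveTracks.empty(); });
            if (gExit) return;
            std::printf("thread: %zu active track(s), start capturing\n", gActiveTracks.size());
            gActiveTracks.clear();           // pretend we consumed the work
        }
    }

    void start(int track) {                  // simplified stand-in for RecordThread::start
        {
            std::lock_guard<std::mutex> lk(gLock);
            gActiveTracks.push_back(track);  // mActiveTracks.add(recordTrack)
        }
        gWaitWorkCV.notify_all();            // mWaitWorkCV.broadcast()
    }

    int main() {
        std::thread t(threadLoop);
        start(1);                            // wakes the loop
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        { std::lock_guard<std::mutex> lk(gLock); gExit = true; }
        gWaitWorkCV.notify_all();
        t.join();
        return 0;
    }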

Next, let's look at how mActiveTracks and the tracks in it are used.

AudioFlinger::RecordThread::threadLoop() is the body of the record thread that was created while AudioFlinger and AudioPolicyService started up.

    for (;;) {                                   // thread loop
        size_t size = mActiveTracks.size();
        // when no track is active, wait on the condition variable; RecordThread::start
        // adds the new track to the queue and broadcasts on the CV so work can begin
        mWaitWorkCV.wait(mLock);

        for (size_t i = 0; i < size; ) {         // iterate over every RecordTrack
            activeTrack = mActiveTracks[i];
            // a switch on activeTrack->mState follows: a track in mActiveTracks may not be
            // active yet; the states are PAUSING, STARTING_1, STARTING_2, ACTIVE and IDLE
            activeTracks.add(activeTrack);       // truly active tracks go into a local list
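
That state handling boils down to a per-track switch that decides whether the track is skipped or promoted into the local active list. A condensed, self-contained sketch of the filtering step, using the state names from the text; the real switch also handles timeouts, removal and further transitions, so this is only an approximation:

    #include <cstdio>
    #include <vector>

    enum class TrackState { PAUSING, STARTING_1, STARTING_2, ACTIVE, IDLE };

    struct Track { int id; TrackState state; };

    // collect the tracks that should actually be recorded this round
    std::vector<Track*> filterActive(std::vector<Track>& all) {
        std::vector<Track*> active;                  // local list, like activeTracks
        for (Track& t : all) {
            switch (t.state) {
            case TrackState::STARTING_2:             // finished starting: record it
                t.state = TrackState::ACTIVE;
                [[fallthrough]];
            case TrackState::ACTIVE:
                active.push_back(&t);                // activeTracks.add(activeTrack)
                break;
            case TrackState::PAUSING:                // being paused or removed: skip
            case TrackState::STARTING_1:             // not ready yet: skip this round
            case TrackState::IDLE:
                break;
            }
        }
        return active;
    }

    int main() {
        std::vector<Track> tracks = {
            {1, TrackState::STARTING_2}, {2, TrackState::IDLE}, {3, TrackState::ACTIVE}};
        for (Track* t : filterActive(tracks))
            std::printf("track %d is active\n", t->id);
        return 0;
    }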

Another core operation in the record threadLoop is reading audio data from the hardware (HAL) into a staging buffer: mRsmpInBuffer is that buffer and mRsmpInRear is its write offset.

        // Read from HAL to keep up with fastest client if multiple active tracks, not slowest one.
        // Only the client(s) that are too slow will overrun. But if even the fastest client is too
        // slow, then this RecordThread will overrun by not calling HAL read often enough.
        // If destination is non-contiguous, first read past the nominal end of buffer, then
        // copy to the right place.  Permitted because mRsmpInBuffer was over-allocated.

        int32_t rear = mRsmpInRear & (mRsmpInFramesP2 - 1);
        ssize_t framesRead;

        // If an NBAIO source is present, use it to read the normal capture's data
        if (mPipeSource != 0) {
            size_t framesToRead = mBufferSize / mFrameSize;
            framesRead = mPipeSource->read((uint8_t*)mRsmpInBuffer + rear * mFrameSize,
                    framesToRead);
            if (framesRead == 0) {
                // since pipe is non-blocking, simulate blocking input
                sleepUs = (framesToRead * 1000000LL) / mSampleRate;
            }
        // otherwise use the HAL / AudioStreamIn directly
        } else {
            ATRACE_BEGIN("read");
            ssize_t bytesRead = mInput->stream->read(mInput->stream,
                    (uint8_t*)mRsmpInBuffer + rear * mFrameSize, mBufferSize);
            ATRACE_END();
            if (bytesRead < 0) {
                framesRead = bytesRead;
            } else {
                framesRead = bytesRead / mFrameSize;
            }
        }
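
The comment above explains the trick: mRsmpInBuffer's nominal size is a power of two, the HAL read lands at rear & (size - 1), and because the buffer is over-allocated the read may run past the nominal end, after which the excess is copied back to the start. A small self-contained model of that write path; the frame type and sizes are arbitrary and only the index arithmetic mirrors the real code:

    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    int main() {
        const size_t framesP2 = 8;                   // nominal ring size, power of two (mRsmpInFramesP2)
        const size_t overAlloc = 4;                  // extra frames past the nominal end
        std::vector<int16_t> buf(framesP2 + overAlloc, 0);   // over-allocated mRsmpInBuffer

        int32_t rsmpInRear = 6;                      // monotonically increasing write counter
        const int16_t halData[4] = {10, 11, 12, 13}; // pretend the HAL produced 4 frames
        const size_t framesRead = 4;

        // write at the masked position; this may run past the nominal end of the ring
        size_t rear = rsmpInRear & (framesP2 - 1);
        std::memcpy(&buf[rear], halData, framesRead * sizeof(int16_t));

        // if the destination was non-contiguous, copy the overflow back to the start
        if (rear + framesRead > framesP2) {
            size_t overflow = rear + framesRead - framesP2;
            std::memcpy(&buf[0], &buf[framesP2], overflow * sizeof(int16_t));
        }
        rsmpInRear += framesRead;                    // advance the unmasked counter

        for (size_t i = 0; i < framesP2; i++)
            std::printf("frame %zu = %d\n", i, buf[i]);
        return 0;
    }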

 

The record threadLoop then iterates over the local list of truly active tracks:

        size = activeTracks.size();
        // loop over each active track
        for (size_t i = 0; i < size; i++) {
            activeTrack = activeTracks[i];
            // ... update this track's buffer, as described below ...

The per-track buffer update works as follows:

activeTrack->mResamplerBufferProvider->sync(&framesIn, &hasOverrun);

mResamplerBufferProvider is a member object of the track, created in the track's constructor with the track's this pointer. The provider therefore holds a reference to the track, and the track holds a reference to its RecordThread, so sync() can reach the RecordThread.

The thread keeps the state of the staging buffer; sync() pulls that state into the mResamplerBufferProvider:

    const int32_t rear = recordThread->mRsmpInRear;
    const int32_t front = mRsmpInFront;
    const ssize_t filled = rear - front;

    // process frames from the RecordThread buffer provider to the RecordTrack buffer
    framesOut = activeTrack->mRecordBufferConverter->convert(activeTrack->mSink.raw,
                    activeTrack->mResamplerBufferProvider, framesOut);

    // update frame information and push timestamp out
    activeTrack->updateTrackFrameInfo(activeTrack->mServerProxy->framesReleased(),
                    mTimestamp.mPosition[ExtendedTimestamp::LOCATION_SERVER],
                    mSampleRate, mTimestamp);

ResamplerBufferProvider and RecordBufferConverter are both implemented in AudioFlinger.cpp; objects of these two classes belong to a track and assist it in the capture process.

When RecordBufferConverter is constructed it creates a resampler, mResampler = AudioResampler::create(...), used for resampling.

RecordBufferConverter::convert uses mResampler to resample the input and copies the result into activeTrack->mSink.raw.
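
RecordBufferConverter's job, channel conversion plus resampling through the AudioResampler it created, can be pictured with a much simpler stand-in. Below is a self-contained linear-interpolation resampler that converts a mono int16 buffer from one rate to another; it only illustrates the idea and is not Android's AudioResampler algorithm:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // naive linear-interpolation resampler: srcRate -> dstRate, mono 16-bit PCM
    std::vector<int16_t> resampleLinear(const std::vector<int16_t>& in,
                                        uint32_t srcRate, uint32_t dstRate) {
        if (in.size() < 2) return in;
        const size_t outFrames = (in.size() - 1) * dstRate / srcRate + 1;
        std::vector<int16_t> out(outFrames);
        const double step = static_cast<double>(srcRate) / dstRate;
        for (size_t i = 0; i < outFrames; i++) {
            const double pos = i * step;                 // fractional source position
            const size_t idx = static_cast<size_t>(pos);
            const double frac = pos - idx;
            const int16_t a = in[idx];
            const int16_t b = in[idx + 1 < in.size() ? idx + 1 : idx];
            out[i] = static_cast<int16_t>(a + (b - a) * frac);   // interpolate between neighbours
        }
        return out;
    }

    int main() {
        std::vector<int16_t> src = {0, 100, 200, 300, 400, 500, 600, 700};
        auto dst = resampleLinear(src, 48000, 16000);    // downsample 48 kHz -> 16 kHz
        for (int16_t s : dst) std::printf("%d ", s);
        std::printf("\n");
        return 0;
    }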

The API read corresponds to the following flow:

native_read_in_byte_array: the JNI implementation;

lpRecorder->read calls the native AudioRecord::read;

ssize_t AudioRecord::read(void* buffer, size_t userSize, bool blocking)

    status_t err = obtainBuffer(&audioBuffer,
            blocking ? &ClientProxy::kForever : &ClientProxy::kNonBlocking);

obtainBuffer is called in a loop to fetch captured audio data into audioBuffer.

obtainBuffer relies on the AudioRecordClientProxy created during the openRecord phase:

mProxy = new AudioRecordClientProxy(cblk, buffers, mFrameCount, mFrameSize);

buffers = bufferMem->pointer(); buffers is the start address of the data region inside the shared memory.

bufferMem is the shared memory created during AudioFlinger's openRecord;

cblk points into iMem and is the buffer control block:

    sp<IMemory> iMem;           // for cblk
    sp<IMemory> bufferMem;
    sp<IAudioRecord> record = audioFlinger->openRecord(input,
                                                       mSampleRate,
                                                       mFormat,
                                                       mChannelMask,
                                                       opPackageName,
                                                       &temp,
                                                       &flags,
                                                       mClientPid,
                                                       tid,
                                                       mClientUid,
                                                       &mSessionId,
                                                       &notificationFrames,
                                                       iMem,
                                                       bufferMem,
                                                       &status);

class AudioRecordClientProxy : public ClientProxy inherits from ClientProxy, which is defined in AudioTrackShared.h; AudioTrack has an equivalent proxy. Its main purpose is to move data between client and server across processes through the shared memory.

obtainBuffer is a member of the parent class ClientProxy; it computes the read position in the shared memory and points the buffer at it:

            buffer->mFrameCount = part1;
            buffer->mRaw = part1 > 0 ?
                    &((char *) mBuffers)[(mIsOut ? rear : front) * mFrameSize] : NULL;
            // mBuffers is the shared-memory buffer passed in when the proxy was created

obtainBuffer is a blocking call: a for(;;) loop waits for audio data produced by the AudioFlinger service and returns once data is available or the wait is interrupted.

Whether data is available is determined from the control block described above: data is written at rear and read from front.

        int32_t front;
        int32_t rear;
        if (mIsOut) {
            // The barrier following the read of mFront is probably redundant.
            // We're about to perform a conditional branch based on 'filled',
            // which will force the processor to observe the read of mFront
            // prior to allowing data writes starting at mRaw.
            // However, the processor may support speculative execution,
            // and be unable to undo speculative writes into shared memory.
            // The barrier will prevent such speculative execution.
            front = android_atomic_acquire_load(&cblk->u.mStreaming.mFront);
            rear = cblk->u.mStreaming.mRear;
        } else {
            // On the other hand, this barrier is required.
            rear = android_atomic_acquire_load(&cblk->u.mStreaming.mRear);
            front = cblk->u.mStreaming.mFront;
        }
        // write to rear, read from front
        ssize_t filled = rear - front;
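
mFront and mRear are unmasked, monotonically increasing counters shared between the two processes: the writer only advances rear, the reader only advances front, and filled = rear - front stays correct even when the 32-bit counters wrap. A self-contained model of that accounting (single process, no atomics or memory barriers, which the real cross-process code does need; the Cblk struct here is a toy stand-in for the control block):

    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct Cblk {                          // toy stand-in for the control block's streaming fields
        int32_t mFront = 0;                // read counter, only advanced by the reader
        int32_t mRear = 0;                 // write counter, only advanced by the writer
    };

    int main() {
        const size_t frameCount = 8;       // ring capacity in frames (power of two)
        std::vector<int16_t> ring(frameCount);
        Cblk cblk;

        // producer side (the server, for capture): write 5 frames at the masked rear position
        for (int16_t v = 100; v < 105; v++) {
            ring[cblk.mRear & (frameCount - 1)] = v;
            cblk.mRear++;                  // advancing rear publishes one more frame
        }

        // consumer side (the client): write to rear, read from front
        int32_t filled = cblk.mRear - cblk.mFront;   // stays correct even if the counters wrap
        std::printf("filled = %d frames\n", filled);

        // obtainBuffer-style: the readable region is contiguous only up to the wrap point
        size_t front = cblk.mFront & (frameCount - 1);
        size_t part1 = std::min<size_t>((size_t) filled, frameCount - front);
        std::printf("contiguous chunk: start index %zu, length %zu frames\n", front, part1);

        cblk.mFront += (int32_t) part1;    // releaseBuffer-style: mark those frames consumed
        return 0;
    }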

 

After obtainBuffer returns the read position in shared memory, AudioRecord::read copies the data into the user buffer, completing one read. A single obtainBuffer call only yields one contiguous chunk of the buffer, so read may call obtainBuffer repeatedly to copy more data into the user buffer:

    size_t bytesRead = audioBuffer.size;
    memcpy(buffer, audioBuffer.i8, bytesRead);

When the client-side AudioRecord calls the AudioFlinger server, it passes in the two IMemory shared-memory objects seen in the openRecord call shown earlier: iMem is used for the control block (cblk) and bufferMem is the shared data buffer.


In sp<IAudioRecord> AudioFlinger::openRecord, the shared memory is obtained from the RecordTrack. One openRecord call from a client corresponds to one track, so for each such track there is a single shared memory region between that client and the server, used to transfer the audio:

    cblk = recordTrack->getCblk();  

    buffers = recordTrack->getBuffers();

The RecordTrack is wrapped into the IAudioRecord Binder object and returned to the client; every call a client makes to the server then goes through its corresponding IAudioRecord:

recordHandle = new RecordHandle(recordTrack);

return recordHandle; the return type is sp<IAudioRecord>, and RecordHandle inherits from BnAudioRecord.

AudioFlinger's openRecord creates the RecordTrack. Two lists are involved here, mTracks and mActiveTracks:

mTracks: newly created tracks are added to this list;

mActiveTracks: tracks are added to this list after start.

RecordTrack inherits from TrackBase and references an AudioFlinger::Client object, whose main resource is a 1024*1024-byte memory heap. The Client is created before the RecordTrack and passed in at construction; it represents a client process inside the server.

The TrackBase constructor creates the shared memory.

mCblkMemory and mBufferMemory correspond to the control block and the data buffer, and are returned through the calls below:

    cblk = recordTrack->getCblk();  

    buffers = recordTrack->getBuffers();
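
TrackBase carves both the control block and the data buffer out of memory that the client process can map; in openRecord they come back as the two IMemory objects. The sketch below mimics that layout with plain POSIX shared memory: one mapping holding a small control struct at offset 0 and the audio buffer right after it. This is only an analogue of the mechanism; Android actually uses ashmem-backed IMemory passed over Binder, and the names cblk/buffers just mirror the text:

    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <sys/mman.h>
    #include <unistd.h>

    struct Cblk {                       // minimal stand-in for the real control block
        volatile int32_t mFront;
        volatile int32_t mRear;
    };

    int main() {
        const size_t frameCount = 256;
        const size_t frameSize = sizeof(int16_t);
        const size_t size = sizeof(Cblk) + frameCount * frameSize;

        // one shared mapping that two related processes could both touch, similar in
        // spirit to the IMemory region shared between the client and AudioFlinger
        void* mem = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) { perror("mmap"); return 1; }

        Cblk* cblk = static_cast<Cblk*>(mem);                    // control block at offset 0
        int16_t* buffers = reinterpret_cast<int16_t*>(cblk + 1); // data buffer right after it
        std::memset(mem, 0, size);

        // "server" writes one frame and publishes it by advancing rear
        buffers[cblk->mRear % frameCount] = 1234;
        cblk->mRear++;

        // "client" sees the frame through the same mapping
        std::printf("filled=%d, first frame=%d\n",
                    cblk->mRear - cblk->mFront, buffers[cblk->mFront % frameCount]);

        munmap(mem, size);
        return 0;
    }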

The above covered reading from the shared memory; the writing side lives in RecordThread's threadLoop:

status_t status = activeTrack->getNextBuffer(&activeTrack->mSink);

status_t status = mServerProxy->obtainBuffer(&buf); this actually executes ServerProxy's obtainBuffer, which works on the same principle as ClientProxy's.

 