Android Source Code Analysis: AudioRecord Recording (Part 1)

   Date: 2021-03-01

Based on Android source code version 9.0

Java code path: frameworks/base/media/java/android/media/
JNI code path: frameworks/base/core/jni/
C++ code path: frameworks/av/media/libaudioclient/
AudioFlinger code path: frameworks/av/services/audioflinger/

Since quite a lot of source code is involved, only parts are quoted here, with much omitted in between; interested readers can consult the full source.

1. Overview

Recording with AudioRecord consists of the following steps:

  1. Get the buffer size required to create the AudioRecord;
  2. Create the AudioRecord from the audio device and AudioRecord parameters;
  3. Call AudioRecord.startRecording to start recording;
  4. Read the recorded audio data with AudioRecord.read(data, 0, bufferSize);
  5. Stop recording and release the recorder.
    The main APIs used:
// static method
AudioRecord.getMinBufferSize(sampleRate, channel, audioFormat)
// create the AudioRecord object
new AudioRecord(MediaRecorder.AudioSource.MIC,
						sampleRate,
						channel,
						AudioFormat.ENCODING_PCM_16BIT,
						bufferSize
				);
audiorecord.startRecording()

2.getMinBufferSize

The AudioRecord constructor takes a bufferSize, which must be obtained through AudioRecord's static getMinBufferSize method. Let's look at getMinBufferSize first.
getMinBufferSize: a static method that returns the minimum buffer size, in bytes, required by an AudioRecord object.

// parameters: sample rate, channel config, audio format
static public int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat) { 
    int channelCount = 0;
    switch (channelConfig) { 
   		// omitted: mono maps channelCount to 1, stereo to 2
    }
	// call the native method
    int size = native_get_min_buff_size(sampleRateInHz, channelCount, audioFormat);
    if (size == 0) { 
        return ERROR_BAD_VALUE;
    }
    else if (size == -1) { 
        return ERROR;
    }
    else { 
        return size;
    }
}

Next, the JNI code in android_media_AudioRecord.cpp:

// Returns the minimum buffer size required to successfully create an AudioRecord instance.
// Returns 0 if the parameter combination is not supported.
// Returns -1 if there was an error querying the buffer size.
static jint android_media_AudioRecord_get_min_buff_size(JNIEnv *env,  jobject thiz,
    jint sampleRateInHertz, jint channelCount, jint audioFormat) { 
    ALOGV(">> android_media_AudioRecord_get_min_buff_size(%d, %d, %d)",
          sampleRateInHertz, channelCount, audioFormat);
    size_t frameCount = 0;
    audio_format_t format = audioFormatToNative(audioFormat); // convert the Java format constant to the native type
    // call the C++ AudioRecord::getMinFrameCount method
    status_t result = AudioRecord::getMinFrameCount(&frameCount,
            sampleRateInHertz,
            format,
            audio_channel_in_mask_from_count(channelCount));
    if (result == BAD_VALUE) { 
        return 0;
    }
    if (result != NO_ERROR) { 
        return -1;
    }
    return frameCount * channelCount * audio_bytes_per_sample(format);
}

C++ AudioRecord.cpp

status_t AudioRecord::getMinFrameCount(
        size_t* frameCount,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask)
{ 
    if (frameCount == NULL) { 
        return BAD_VALUE;
    }
    size_t size;
    status_t status = AudioSystem::getInputBufferSize(sampleRate, format, channelMask, &size);
    if (status != NO_ERROR) { 
        return status;
    }
    // check that the computed size is valid
    // We double the size of input buffer for ping pong use of record buffer.
    // Assumes audio_is_linear_pcm(format)
    if ((*frameCount = (size * 2) / (audio_channel_count_from_in_mask(channelMask) *
            audio_bytes_per_sample(format))) == 0) { 
        return BAD_VALUE;
    }
    return NO_ERROR;
}
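
Putting the native and JNI pieces together: the HAL reports an input buffer size in bytes, getMinFrameCount doubles it and converts it to frames, and the JNI layer converts frames back to bytes. A minimal Java sketch of this arithmetic (helper names are illustrative; the real halBufferSizeBytes comes from AudioSystem::getInputBufferSize):

```java
// Sketch of the min-buffer-size arithmetic from AudioRecord::getMinFrameCount
// and android_media_AudioRecord_get_min_buff_size. halBufferSizeBytes stands
// in for the value the HAL reports; these names are not AOSP APIs.
public class MinBufferSizeSketch {
    // getMinFrameCount: the HAL buffer is doubled for ping-pong use of the
    // record buffer, then divided by the frame size (channels * bytes/sample).
    static long minFrameCount(long halBufferSizeBytes, int channelCount, int bytesPerSample) {
        return (halBufferSizeBytes * 2) / (channelCount * bytesPerSample);
    }

    // JNI layer: convert frames back to bytes.
    static long minBufferSizeBytes(long halBufferSizeBytes, int channelCount, int bytesPerSample) {
        return minFrameCount(halBufferSizeBytes, channelCount, bytesPerSample)
                * channelCount * bytesPerSample;
    }

    public static void main(String[] args) {
        // e.g. a 3840-byte HAL buffer, stereo, 16-bit PCM (2 bytes per sample)
        System.out.println(minBufferSizeBytes(3840, 2, 2)); // prints 7680
    }
}
```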

AudioSystem calls AudioFlinger's getInputBufferSize function via Binder.
AudioSystem.cpp

status_t AudioSystem::getInputBufferSize(uint32_t sampleRate, audio_format_t format,
        audio_channel_mask_t channelMask, size_t* buffSize)
{ 
    const sp<AudioFlingerClient> afc = getAudioFlingerClient();
    if (afc == 0) { 
        return NO_INIT;
    }
    return afc->getInputBufferSize(sampleRate, format, channelMask, buffSize);
}

AudioFlinger runs in the audioserver system process and interacts with the HAL layer.

3. Initializing AudioRecord

Flow: AudioRecord.java -> android_media_AudioRecord.cpp -> AudioRecord.cpp -> AudioSystem.cpp -> IAudioFlinger.cpp -> AudioFlinger.cpp

The AudioRecord constructor:

public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat,
        int bufferSizeInBytes)
throws IllegalArgumentException { 
    this((new AudioAttributes.Builder())
                .setInternalCapturePreset(audioSource)
                .build(),
            (new AudioFormat.Builder())
                .setChannelMask(getChannelMaskFromLegacyConfig(channelConfig,
                                    true))
                .setEncoding(audioFormat)
                .setSampleRate(sampleRateInHz)
                .build(),
            bufferSizeInBytes,
            AudioManager.AUDIO_SESSION_ID_GENERATE);
}

This creates the AudioAttributes and AudioFormat objects and calls another AudioRecord constructor.
AudioManager.AUDIO_SESSION_ID_GENERATE is a special audio session ID indicating that the session ID is unknown and the framework should generate a new value.

@SystemApi
public AudioRecord(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes,
        int sessionId) throws IllegalArgumentException { 
    mRecordingState = RECORDSTATE_STOPPED;
	//.. some code omitted: checks on attributes and format
	
	// sampleRate, channel, etc. conversions (omitted)
	// check the buffer size
    audioBuffSizeCheck(bufferSizeInBytes);

    int[] sampleRate = new int[] { mSampleRate};
    int[] session = new int[1];
    session[0] = sessionId;
    // call the native method to initialize the device
    //TODO: update native initialization when information about hardware init failure
    // due to capture device already open is available.
    int initResult = native_setup( new WeakReference<AudioRecord>(this),
            mAudioAttributes, sampleRate, mChannelMask, mChannelIndexMask,
            mAudioFormat, mNativeBufferSizeInBytes,
            session, getCurrentOpPackageName(), 0 );
    if (initResult != SUCCESS) { 
        loge("Error code "+initResult+" when initializing native AudioRecord object.");
        return; // with mState == STATE_UNINITIALIZED
    }
    // on success, read back the sample rate and session id returned by the lower layer
    mSampleRate = sampleRate[0];
    mSessionId = session[0];
    mState = STATE_INITIALIZED;
}
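
Note the single-element int[] arrays above: Java has no out-parameters, so the constructor passes one-element arrays that native_setup can write the final sample rate and session id into. A minimal sketch of the same pattern (fakeNativeSetup is a stand-in, not the real JNI call, and its values are made up):

```java
// Demonstrates the out-parameter-via-array pattern used by the AudioRecord
// constructor: the native layer writes negotiated values back through the
// arrays it receives, and the caller reads them afterwards.
public class OutParamSketch {
    // Stand-in for native_setup: pretend the native layer picked a different
    // sample rate and allocated a session id. The values are illustrative.
    static int fakeNativeSetup(int[] sampleRate, int[] session) {
        sampleRate[0] = 48000; // rate chosen by the lower layer
        session[0] = 17;       // session id allocated by the server
        return 0;              // SUCCESS
    }

    public static void main(String[] args) {
        int[] sampleRate = { 44100 };
        int[] session = { 0 };
        int result = fakeNativeSetup(sampleRate, session);
        // After the call, the caller reads the values the callee wrote.
        System.out.println(result + " " + sampleRate[0] + " " + session[0]); // prints 0 48000 17
    }
}
```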

native_setup corresponds to the JNI function android_media_AudioRecord_setup.

android_media_AudioRecord_setup

It mainly:

  • converts and checks parameters: AudioAttributes, sampleRate, channelIndexMask, format
  • creates the C++ AudioRecord object
  • calls lpRecorder->set to configure parameters
  • writes the session id and sample rate back to the Java layer
  • stores the C++ object in fields of the Java object
static jint
android_media_AudioRecord_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jobject jaa, jintArray jSampleRate, jint channelMask, jint channelIndexMask,
        jint audioFormat, jint buffSizeInBytes, jintArray jSession, jstring opPackageName,
        jlong nativeRecordInJavaObj)
{ 
   	//.........
	// fetch channel/session values (omitted)
    audio_attributes_t *paa = NULL;
    sp<AudioRecord> lpRecorder = 0;
    audiorecord_callback_cookie *lpCallbackData = NULL;
    
    // decide whether the C++ AudioRecord needs to be created
    if (nativeRecordInJavaObj == 0) { 
    	//.........
		// checks on AudioAttributes, sampleRate, channelIndexMask, format, etc. (omitted)
        
        size_t bytesPerSample = audio_bytes_per_sample(format);

        if (buffSizeInBytes == 0) { 
             ALOGE("Error creating AudioRecord: frameCount is 0.");
            return (jint) AUDIORECORD_ERROR_SETUP_ZEROFRAMECOUNT;
        }
        size_t frameSize = channelCount * bytesPerSample;
        size_t frameCount = buffSizeInBytes / frameSize;

        // create the C++ AudioRecord object
        lpRecorder = new AudioRecord(String16(opPackageNameStr.c_str()));

        // create the callback information:
        // this data will be passed with every AudioRecord callback
        lpCallbackData = new audiorecord_callback_cookie;
        lpCallbackData->audioRecord_class = (jclass)env->NewGlobalRef(clazz);
        // we use a weak reference so the AudioRecord object can be garbage collected
        lpCallbackData->audioRecord_ref = env->NewGlobalRef(weak_this);
        lpCallbackData->busy = false;
		// set the AudioRecord parameters
        const status_t status = lpRecorder->set(paa->source,
            sampleRateInHertz,
            format,        // word length, PCM
            localChanMask,
            frameCount,
            recorderCallback,// callback_t
            lpCallbackData,// void* user
            0,             // notificationFrames,
            true,          // threadCanCallJava
            sessionId,
            AudioRecord::TRANSFER_DEFAULT,
            flags,
            -1, -1,        // default uid, pid
            paa);

        if (status != NO_ERROR) { 
            ALOGE("Error creating AudioRecord instance: initialization check failed with status %d.",
                    status);
            goto native_init_failure;
        }
    } else {  // end if nativeRecordInJavaObj == 0)
        lpRecorder = (AudioRecord*)nativeRecordInJavaObj;
        lpCallbackData = new audiorecord_callback_cookie;
        lpCallbackData->audioRecord_class = (jclass)env->NewGlobalRef(clazz);
        // we use a weak reference so the AudioRecord object can be garbage collected.
        lpCallbackData->audioRecord_ref = env->NewGlobalRef(weak_this);
        lpCallbackData->busy = false;
    }
	// write the session id and sample rate back to the Java layer
    nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) { 
        ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
        goto native_init_failure;
    }
    // read the audio session ID back from AudioRecord in case a new session was created during set()
    nSession[0] = lpRecorder->getSessionId();
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;

    { 
        const jint elements[1] = {  (jint) lpRecorder->getSampleRate() };
        env->SetIntArrayRegion(jSampleRate, 0, 1, elements);
    }

    {    // scope for the lock
        Mutex::Autolock l(sLock);
        sAudioRecordCallBackCookies.add(lpCallbackData);
    }
    // save the newly created C++ AudioRecord in the Java object's "nativeRecorderInJavaObj" field
    setAudioRecord(env, thiz, lpRecorder);
    // save the newly created callback data in the Java object (in mNativeCallbackCookie)
    // so that finalize() can free the native memory
    env->SetLongField(thiz, javaAudioRecordFields.nativeCallbackCookie, (jlong)lpCallbackData);
	// return the status
    return (jint) AUDIO_JAVA_SUCCESS;
    // failure:
native_init_failure:
    //.....
    // lpRecorder goes out of scope, so reference count drops to zero
    return (jint) AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
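
The frame math in the snippet above is worth spelling out: a frame holds one sample per channel, so the byte buffer size translates into a frame count. A sketch with illustrative values:

```java
// Frame-size arithmetic from android_media_AudioRecord_setup:
// frameSize = channelCount * bytesPerSample (one sample per channel per frame)
// frameCount = buffSizeInBytes / frameSize
public class FrameMathSketch {
    static int frameSize(int channelCount, int bytesPerSample) {
        return channelCount * bytesPerSample;
    }

    static int frameCount(int buffSizeInBytes, int channelCount, int bytesPerSample) {
        return buffSizeInBytes / frameSize(channelCount, bytesPerSample);
    }

    public static void main(String[] args) {
        // 7680-byte buffer, stereo 16-bit PCM: 4-byte frames, 1920 frames
        System.out.println(frameSize(2, 2) + " " + frameCount(7680, 2, 2)); // prints 4 1920
    }
}
```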

AudioRecord.cpp

The AudioRecord constructor is fairly simple; it only initializes some member variables to their defaults.

AudioRecord::AudioRecord(const String16 &opPackageName)
    : mActive(false), mStatus(NO_INIT), mOpPackageName(opPackageName),
      mSessionId(AUDIO_SESSION_ALLOCATE),
      mPreviousPriority(ANDROID_PRIORITY_NORMAL), mPreviousSchedulingGroup(SP_DEFAULT),
      mSelectedDeviceId(AUDIO_PORT_HANDLE_NONE), mRoutedDeviceId(AUDIO_PORT_HANDLE_NONE)
{ }

Now look at AudioRecord's set function:

status_t AudioRecord::set(
        audio_source_t inputSource,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        callback_t cbf,
        void* user,
        uint32_t notificationFrames,
        bool threadCanCallJava,
        audio_session_t sessionId,
        transfer_type transferType,
        audio_input_flags_t flags,
        uid_t uid,
        pid_t pid,
        const audio_attributes_t* pAttributes,
        audio_port_handle_t selectedDeviceId)
{ 
    status_t status = NO_ERROR;
    
    // invariant that mAudioRecord != 0 is true only after set() returns successfully
    // check whether set() has already been called
    if (mAudioRecord != 0) { 
        ALOGE("Track already in use");
        status = INVALID_OPERATION;
        goto exit;
    }
	//....
    mOrigFlags = mFlags = flags;
    mCbf = cbf;
    if (cbf != NULL) { 
    	// create the record thread
        mAudioRecordThread = new AudioRecordThread(*this, threadCanCallJava);
        mAudioRecordThread->run("AudioRecord", ANDROID_PRIORITY_AUDIO);
        // thread begins in paused state, and will not reference us until start()
    }

    // create the IAudioRecord
    status = createRecord_l(0 , mOpPackageName);

    if (status != NO_ERROR) { 
        if (mAudioRecordThread != 0) { 
            mAudioRecordThread->requestExit();   // see comment in AudioRecord.h
            mAudioRecordThread->requestExitAndWait();
            mAudioRecordThread.clear();
        }
        goto exit;
    }
   // acquire the audio session id
    AudioSystem::acquireAudioSessionId(mSessionId, -1);

    return status;
}
  • When the cbf callback is non-null, start a recording thread, AudioRecordThread;
  • Call createRecord_l(0) to create the IAudioRecord object;
  • If creation fails, tear the AudioRecordThread down;
  • On success, acquire the audio session id.
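
The create-then-tear-down-on-failure handling of AudioRecordThread can be sketched with plain Java threads (this is an analogy to illustrate the pattern, not the AOSP thread classes):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Analogy for AudioRecord::set(): a worker thread is started up front when a
// callback is supplied; if createRecord_l then fails, the thread is asked to
// exit and joined before the error is returned.
public class RecordSetupSketch {
    static void join(Thread t) {
        try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    static int setUp(boolean createRecordSucceeds) {
        AtomicBoolean exitRequested = new AtomicBoolean(false);
        Thread worker = new Thread(() -> {
            while (!exitRequested.get()) Thread.yield(); // stand-in for the capture loop
        });
        worker.start();                      // AudioRecordThread->run(...)
        if (!createRecordSucceeds) {         // createRecord_l returned an error
            exitRequested.set(true);         // requestExit()
            join(worker);                    // requestExitAndWait()
            return -1;                       // propagate the error
        }
        exitRequested.set(true);             // demo only: stop right away
        join(worker);
        return 0;                            // NO_ERROR
    }

    public static void main(String[] args) {
        System.out.println(setUp(true) + " " + setUp(false)); // prints 0 -1
    }
}
```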

Next, the createRecord_l function. Its key call is:
const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();

  • Obtain the IAudioFlinger object; calls go through Binder into AudioFlinger;
  • audioFlinger->createRecord creates the IAudioRecord object, which corresponds to a RecordHandle in AudioFlinger;
  • Set up the IMemory shared memory;
  • Update the AudioRecordClientProxy.
status_t AudioRecord::createRecord_l(const Modulo<uint32_t> &epoch, const String16& opPackageName)
{ 
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    IAudioFlinger::CreateRecordInput input;
    IAudioFlinger::CreateRecordOutput output;
    audio_session_t originalSessionId;
    sp<media::IAudioRecord> record;
    void *iMemPointer;
    audio_track_cblk_t* cblk;
    status_t status;

    if (audioFlinger == 0) { 
        ALOGE("Could not get audioflinger");
        status = NO_INIT;
        goto exit;
    }
	//.........

	// cross-process call to create the record
    record = audioFlinger->createRecord(input,
                                        output,
                                        &status);
    if (status != NO_ERROR) { 
        ALOGE("AudioFlinger could not create record track, status: %d", status);
        goto exit;
    }
    ALOG_ASSERT(record != 0);
    // AudioFlinger now owns a reference to the I/O handle, so we are no longer responsible for releasing it.

	if (output.cblk == 0) { 
        ALOGE("Could not get control block");
        status = NO_INIT;
        goto exit;
    }
    iMemPointer = output.cblk ->pointer();
    if (iMemPointer == NULL) { 
        ALOGE("Could not get control block pointer");
        status = NO_INIT;
        goto exit;
    }
    cblk = static_cast<audio_track_cblk_t*>(iMemPointer);

    // starting address of the buffers in shared memory
    // the buffers are either immediately after the control block, or in a separate area at the discretion of the server
    void *buffers;
    if (output.buffers == 0) { 
        buffers = cblk + 1;
    } else { 
        buffers = output.buffers->pointer();
        if (buffers == NULL) { 
            ALOGE("Could not get buffer pointer");
            status = NO_INIT;
            goto exit;
        }
    }
	//..................
    // invariant: mAudioRecord != 0 is true only after set() returns successfully
    if (mAudioRecord != 0) { 
        IInterface::asBinder(mAudioRecord)->unlinkToDeath(mDeathNotifier, this);
        mDeathNotifier.clear();
    }
    // store the AudioRecord
    mAudioRecord = record;
    mCblkMemory = output.cblk;
    mBufferMemory = output.buffers;
    IPCThreadState::self()->flushCommands();

    // we retain a copy of the I/O handle, but don't own the reference
    mInput = output.inputId;
    mRefreshRemaining = true;

    mFrameCount = output.frameCount;
    // If IAudioRecord is re-created, don't let the requested frameCount
    // decrease. This can confuse clients that cache frameCount().
    if (mFrameCount > mReqFrameCount) { 
        mReqFrameCount = mFrameCount;
    }

    // update proxy
    mProxy = new AudioRecordClientProxy(cblk, buffers, mFrameCount, mFrameSize);
    mProxy->setEpoch(epoch);
    mProxy->setMinimum(mNotificationFramesAct);

    mDeathNotifier = new DeathNotifier(this);
    IInterface::asBinder(mAudioRecord)->linkToDeath(mDeathNotifier, this);

exit:
    mStatus = status;
    // sp<IAudioTrack> track destructor will cause releaseOutput() to be called by AudioFlinger
    return status;
}

AudioFlinger.cpp

Next, AudioFlinger's createRecord:

  • getInputForAttr obtains the input stream;
  • checkRecordThread_l fetches the RecordThread for the input; this RecordThread is created during getInputForAttr;
  • createRecordTrack_l creates the RecordTrack object that manages the audio data;
  • a RecordHandle object is created and returned.
sp<media::IAudioRecord> AudioFlinger::createRecord(const CreateRecordInput& input,
                                                   CreateRecordOutput& output,
                                                   status_t *status)
{ 
    // Not a conventional loop, but a retry loop with at most two total iterations.
    // First try with the FAST flag; if that fails, try again without the FAST flag.
    // Exits the loop via break on success, or via goto on error;
    // the sp<> references are dropped when re-entering the scope.
    // The lack of indentation is deliberate, to reduce code churn and ease merges.
    for (;;) { 
    // release the previously opened input if retrying
    if (output.inputId != AUDIO_IO_HANDLE_NONE) { 
        recordTrack.clear();
        AudioSystem::releaseInput(portId);
        output.inputId = AUDIO_IO_HANDLE_NONE;
        output.selectedDeviceId = input.selectedDeviceId;
        portId = AUDIO_PORT_HANDLE_NONE;
    }
    lStatus = AudioSystem::getInputForAttr(&input.attr, &output.inputId,
                                      sessionId,
                                    // FIXME compare to AudioTrack
                                      clientPid,
                                      clientUid,
                                      input.opPackageName,
                                      &input.config,
                                      output.flags, &output.selectedDeviceId, &portId);

    { 
        Mutex::Autolock _l(mLock);
        RecordThread *thread = checkRecordThread_l(output.inputId);
        if (thread == NULL) { 
            ALOGE("createRecord() checkRecordThread_l failed");
            lStatus = BAD_VALUE;
            goto Exit;
        }
        output.sampleRate = input.config.sample_rate;
        output.frameCount = input.frameCount;
        output.notificationFrameCount = input.notificationFrameCount;
        recordTrack = thread->createRecordTrack_l(client, input.attr, &output.sampleRate,
                                                  input.config.format, input.config.channel_mask,
                                                  &output.frameCount, sessionId,
                                                  &output.notificationFrameCount,
                                                  clientUid, &output.flags,
                                                  input.clientInfo.clientTid,
                                                  &lStatus, portId);
        LOG_ALWAYS_FATAL_IF((lStatus == NO_ERROR) && (recordTrack == 0));
        // lStatus == BAD_TYPE means FAST flag was rejected: request a new input from
        // audio policy manager without FAST constraint
        if (lStatus == BAD_TYPE) { 
            continue;
        }
        if (lStatus != NO_ERROR) { 
            goto Exit;
        }

        // Check if one effect chain was awaiting for an AudioRecord to be created on this
        // session and move it to this thread.
        sp<EffectChain> chain = getOrphanEffectChain_l(sessionId);
        if (chain != 0) { 
            Mutex::Autolock _l(thread->mLock);
            thread->addEffectChain_l(chain);
        }
        break;
    }
    // End of retry loop.
    // The lack of indentation is deliberate, to reduce code churn and ease merges.
    }

    output.cblk = recordTrack->getCblk();
    output.buffers = recordTrack->getBuffers();

    // return handle to client
    recordHandle = new RecordHandle(recordTrack);
    
    *status = lStatus;
    return recordHandle;
}

Note that a RecordHandle object is returned here as an sp<media::IAudioRecord>. From this point on, mAudioRecord in AudioRecord.cpp communicates directly with the RecordHandle.
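
The two-iteration retry loop in createRecord has a simple shape: try with the requested flags, and if the record thread rejects the FAST constraint (BAD_TYPE), retry once without it. A sketch with made-up flag values and a fake creator (not AOSP code):

```java
// Shape of AudioFlinger::createRecord's retry loop: at most two attempts,
// dropping the FAST flag on the second. tryCreate stands in for the
// getInputForAttr + createRecordTrack_l sequence; BAD_TYPE means the FAST
// request was rejected. Constants here are illustrative, not AOSP values.
public class FastRetrySketch {
    static final int FLAG_FAST = 1;
    static final int NO_ERROR = 0;
    static final int BAD_TYPE = -2;

    // Fake creator: rejects FAST requests, accepts everything else.
    static int tryCreate(int flags) {
        return (flags & FLAG_FAST) != 0 ? BAD_TYPE : NO_ERROR;
    }

    static int createRecord(int flags) {
        for (;;) {
            int status = tryCreate(flags);
            if (status == BAD_TYPE && (flags & FLAG_FAST) != 0) {
                flags &= ~FLAG_FAST; // FAST rejected: retry without it
                continue;
            }
            return status;           // success, or a real error
        }
    }

    public static void main(String[] args) {
        System.out.println(createRecord(FLAG_FAST)); // falls back and prints 0
    }
}
```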

getInputForAttr

Via Binder, this calls AudioPolicyService's getInputForAttr function.

status_t AudioSystem::getInputForAttr(const audio_attributes_t *attr,
                                audio_io_handle_t *input,
                                audio_session_t session,
                                pid_t pid,
                                uid_t uid,
                                const String16& opPackageName,
                                const audio_config_base_t *config,
                                audio_input_flags_t flags,
                                audio_port_handle_t *selectedDeviceId,
                                audio_port_handle_t *portId)
{ 
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return NO_INIT;
    return aps->getInputForAttr(
            attr, input, session, pid, uid, opPackageName,
            config, flags, selectedDeviceId, portId);
}

frameworks/av/services/audiopolicy/service/AudioPolicyInterfaceImpl.cpp
AudioPolicyService::getInputForAttr -> AudioPolicyManager::getInputForAttr

AudioPolicyManager getInputForAttr

  • getDeviceAndMixForInputSource obtains the audio_devices_t device and the policy mix;
  • getInputForDevice obtains the input stream for that device.
status_t AudioPolicyManager::getInputForAttr(const audio_attributes_t *attr,
                                             audio_io_handle_t *input,
                                             audio_session_t session,
                                             uid_t uid,
                                             const audio_config_base_t *config,
                                             audio_input_flags_t flags,
                                             audio_port_handle_t *selectedDeviceId,
                                             input_type_t *inputType,
                                             audio_port_handle_t *portId)
{ 
	//.....
	if (inputSource == AUDIO_SOURCE_REMOTE_SUBMIX &&
            strncmp(attr->tags, "addr=", strlen("addr=")) == 0) { 
        status = mPolicyMixes.getInputMixForAttr(*attr, &policyMix);
        if (status != NO_ERROR) { 
            goto error;
        }
        *inputType = API_INPUT_MIX_EXT_POLICY_REROUTE;
        device = AUDIO_DEVICE_IN_REMOTE_SUBMIX;
        address = String8(attr->tags + strlen("addr="));
    } else { 
        device = getDeviceAndMixForInputSource(inputSource, &policyMix);
       	//.....
    }
    *input = getInputForDevice(device, address, session, uid, inputSource,
                               config, flags,
                               policyMix);
    if (*input == AUDIO_IO_HANDLE_NONE) { 
        status = INVALID_OPERATION;
        goto error;
    }
    inputDevices = mAvailableInputDevices.getDevicesFromType(device);
    *selectedDeviceId = inputDevices.size() > 0 ? inputDevices.itemAt(0)->getId()
            : AUDIO_PORT_HANDLE_NONE;

    return NO_ERROR;
}

Next, getInputForDevice:

audio_io_handle_t AudioPolicyManager::getInputForDevice(audio_devices_t device,
                                                        String8 address,
                                                        audio_session_t session,
                                                        uid_t uid,
                                                        audio_source_t inputSource,
                                                        const audio_config_base_t *config,
                                                        audio_input_flags_t flags,
                                                        const sp<AudioPolicyMix> &policyMix)
{ 
	//.....
    sp<AudioSession> audioSession = new AudioSession(session,
                                                     inputSource,
                                                     config->format,
                                                     samplingRate,
                                                     config->channel_mask,
                                                     flags,
                                                     uid,
                                                     isSoundTrigger,
                                                     policyMix, mpClientInterface);
                                                     
    sp<AudioInputDescriptor> inputDesc = new AudioInputDescriptor(profile, mpClientInterface);

    audio_config_t lConfig = AUDIO_CONFIG_INITIALIZER;
    lConfig.sample_rate = profileSamplingRate;
    lConfig.channel_mask = profileChannelMask;
    lConfig.format = profileFormat;

    if (address == "") { 
        DeviceVector inputDevices = mAvailableInputDevices.getDevicesFromType(device);
        // the inputs vector must be of size >= 1, but we don't want to crash here
        address = inputDevices.size() > 0 ? inputDevices.itemAt(0)->mAddress : String8("");
    }

    status_t status = inputDesc->open(&lConfig, device, address,
            halInputSource, profileFlags, &input);

    // only accept input with the exact requested set of parameters
    if (status != NO_ERROR || input == AUDIO_IO_HANDLE_NONE ||
        (profileSamplingRate != lConfig.sample_rate) ||
        !audio_formats_match(profileFormat, lConfig.format) ||
        (profileChannelMask != lConfig.channel_mask)) { 
        ALOGW("getInputForAttr() failed opening input: sampling rate %d"
              ", format %#x, channel mask %#x",
              profileSamplingRate, profileFormat, profileChannelMask);
        if (input != AUDIO_IO_HANDLE_NONE) { 
            inputDesc->close();
        }
        return AUDIO_IO_HANDLE_NONE;
    }

    inputDesc->mPolicyMix = policyMix;
    inputDesc->addAudioSession(session, audioSession);

    addInput(input, inputDesc);
    mpClientInterface->onAudioPortListUpdate();

    return input;
}

inputDesc->open corresponds to the AudioInputDescriptor open function:

status_t AudioInputDescriptor::open(const audio_config_t *config,
                                       audio_devices_t device,
                                       const String8& address,
                                       audio_source_t source,
                                       audio_input_flags_t flags,
                                       audio_io_handle_t *input)
{ 
    audio_config_t lConfig;
    if (config == nullptr) { 
        lConfig = AUDIO_CONFIG_INITIALIZER;
        lConfig.sample_rate = mSamplingRate;
        lConfig.channel_mask = mChannelMask;
        lConfig.format = mFormat;
    } else { 
        lConfig = *config;
    }
    mDevice = device;

    status_t status = mClientInterface->openInput(mProfile->getModuleHandle(),
                                                  input,
                                                  &lConfig,
                                                  &mDevice,
                                                  address,
                                                  source,
                                                  flags);

    if (status == NO_ERROR) { 
        mSamplingRate = lConfig.sample_rate;
        mChannelMask = lConfig.channel_mask;
        mFormat = lConfig.format;
        mId = AudioPort::getNextUniqueId();
        mIoHandle = *input;
        mProfile->curOpenCount++;
    }

    return status;
}

mClientInterface->openInput
frameworks/av/services/audiopolicy/service/AudioPolicyClientImpl.cpp

status_t AudioPolicyService::AudioPolicyClient::openInput(audio_module_handle_t module,
                                                          audio_io_handle_t *input,
                                                          audio_config_t *config,
                                                          audio_devices_t *device,
                                                          const String8& address,
                                                          audio_source_t source,
                                                          audio_input_flags_t flags)
{ 
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) { 
        ALOGW("%s: could not get AudioFlinger", __func__);
        return PERMISSION_DENIED;
    }

    return af->openInput(module, input, config, device, address, source, flags);
}

This calls AudioFlinger's openInput function, which:

  • calls findSuitableHwDev_l to obtain the HW module;
  • calls inHwHal->openInputStream to open the input stream in the HAL layer;
  • creates an AudioStreamIn object;
  • creates a RecordThread and stores it.
    The HAL layer code is not analyzed here.
status_t AudioFlinger::openInput(audio_module_handle_t module,
                                          audio_io_handle_t *input,
                                          audio_config_t *config,
                                          audio_devices_t *devices,
                                          const String8& address,
                                          audio_source_t source,
                                          audio_input_flags_t flags)
{ 
    Mutex::Autolock _l(mLock);
    if (*devices == AUDIO_DEVICE_NONE) { 
        return BAD_VALUE;
    }
    sp<ThreadBase> thread = openInput_l(module, input, config, *devices, address, source, flags);

    if (thread != 0) { 
        // notify client processes of the new input creation
        thread->ioConfigChanged(AUDIO_INPUT_OPENED);
        return NO_ERROR;
    }
    return NO_INIT;
}

sp<AudioFlinger::ThreadBase> AudioFlinger::openInput_l(audio_module_handle_t module,
                                                         audio_io_handle_t *input,
                                                         audio_config_t *config,
                                                         audio_devices_t devices,
                                                         const String8& address,
                                                         audio_source_t source,
                                                         audio_input_flags_t flags)
{ 
    AudioHwDevice *inHwDev = findSuitableHwDev_l(module, devices);
    if (inHwDev == NULL) { 
        *input = AUDIO_IO_HANDLE_NONE;
        return 0;
    }

    // Audio Policy can request a specific handle for hardware hotword.
    // The goal here is not to re-open an already opened input.
    // It is to use a pre-assigned I/O handle.
    if (*input == AUDIO_IO_HANDLE_NONE) { 
        *input = nextUniqueId(AUDIO_UNIQUE_ID_USE_INPUT);
    } else if (audio_unique_id_get_use(*input) != AUDIO_UNIQUE_ID_USE_INPUT) { 
        ALOGE("openInput_l() requested input handle %d is invalid", *input);
        return 0;
    } else if (mRecordThreads.indexOfKey(*input) >= 0) { 
        // This should not happen in a transient state with current design.
        ALOGE("openInput_l() requested input handle %d is already assigned", *input);
        return 0;
    }

    audio_config_t halconfig = *config;
    sp<DeviceHalInterface> inHwHal = inHwDev->hwDevice();
    sp<StreamInHalInterface> inStream;
    status_t status = inHwHal->openInputStream(
            *input, devices, &halconfig, flags, address.string(), source, &inStream);

    // If the input could not be opened with the requested parameters and we can handle the
    // conversion internally, try to open again with the proposed parameters.
    if (status == BAD_VALUE &&
        audio_is_linear_pcm(config->format) &&
        audio_is_linear_pcm(halconfig.format) &&
        (halconfig.sample_rate <= AUDIO_RESAMPLER_DOWN_RATIO_MAX * config->sample_rate) &&
        (audio_channel_count_from_in_mask(halconfig.channel_mask) <= FCC_8) &&
        (audio_channel_count_from_in_mask(config->channel_mask) <= FCC_8)) { 
        // FIXME describe the change proposed by HAL (save old values so we can log them here)
        ALOGV("openInput_l() reopening with proposed sampling rate and channel mask");
        inStream.clear();
        status = inHwHal->openInputStream(
                *input, devices, &halconfig, flags, address.string(), source, &inStream);
        // FIXME log this new status; HAL should not propose any further changes
    }

    if (status == NO_ERROR && inStream != 0) { 
        AudioStreamIn *inputStream = new AudioStreamIn(inHwDev, inStream, flags);
        if ((flags & AUDIO_INPUT_FLAG_MMAP_NOIRQ) != 0) { 
            sp<MmapCaptureThread> thread =
                    new MmapCaptureThread(this, *input,
                                          inHwDev, inputStream,
                                          primaryOutputDevice_l(), devices, mSystemReady);
            mMmapThreads.add(*input, thread);
            ALOGV("openInput_l() created mmap capture thread: ID %d thread %p", *input,
                    thread.get());
            return thread;
        } else { 

            // create the RecordThread; it needs both input and output device indication to forward to the audio pre-processing modules
            sp<RecordThread> thread = new RecordThread(this,
                                      inputStream,
                                      *input,
                                      primaryOutputDevice_l(),
                                      devices,
                                      mSystemReady
                                      );
            mRecordThreads.add(*input, thread);
            ALOGV("openInput_l() created record thread: ID %d thread %p", *input, thread.get());
            return thread;
        }
    }

    *input = AUDIO_IO_HANDLE_NONE;
    return 0;
}
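The fallback above retries openInputStream only when the HAL's proposed parameters are still convertible by the framework: both formats are linear PCM, the down-sample ratio is bounded, and neither side exceeds 8 channels. A minimal sketch of that admissibility check, with hypothetical names and simplified fields (assuming AUDIO_RESAMPLER_DOWN_RATIO_MAX is 6 and FCC_8 is 8, as in AOSP 9.0):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical, simplified stand-in for the relevant audio_config_t fields.
struct Config { uint32_t sampleRate; uint32_t channels; };

// Assumed constants mirroring AUDIO_RESAMPLER_DOWN_RATIO_MAX and FCC_8.
constexpr uint32_t kDownRatioMax = 6;
constexpr uint32_t kMaxChannels  = 8;

// True when the framework could convert the HAL's proposed config into the
// requested one, mirroring the condition that guards the reopen above
// (the linear-PCM checks are omitted since this toy Config is always PCM).
bool conversionPossible(const Config& requested, const Config& proposed) {
    return proposed.sampleRate <= kDownRatioMax * requested.sampleRate &&
           proposed.channels <= kMaxChannels &&
           requested.channels <= kMaxChannels;
}
```

If the check fails, openInput_l gives up rather than retrying with the HAL's proposal.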

4.startRecording

startRecording starts the capture. The Java side checks the object state and delegates to native code:

public void startRecording() throws IllegalStateException { 
    if (mState != STATE_INITIALIZED) { 
        throw new IllegalStateException("startRecording() called on an "
                + "uninitialized AudioRecord.");
    }

    // start recording
    synchronized(mRecordingStateLock) { 
        if (native_start(MediaSyncEvent.SYNC_EVENT_NONE, 0) == SUCCESS) { 
            handleFullVolumeRec(true);
            mRecordingState = RECORDSTATE_RECORDING;
        }
    }
}

native_start is implemented in JNI:

static jint
android_media_AudioRecord_start(JNIEnv *env, jobject thiz, jint event, jint triggerSession)
{ 
	// Retrieve the native AudioRecord pointer saved earlier
    sp<AudioRecord> lpRecorder = getAudioRecord(env, thiz);
    if (lpRecorder == NULL ) { 
        jniThrowException(env, "java/lang/IllegalStateException", NULL);
        return (jint) AUDIO_JAVA_ERROR;
    }

    return nativeToJavaStatus(
            lpRecorder->start((AudioSystem::sync_event_t)event, (audio_session_t) triggerSession));
}

This calls the C++ AudioRecord::start() method and converts the returned native status into a Java status code.
Next, AudioRecord::start:

status_t AudioRecord::start(AudioSystem::sync_event_t event, audio_session_t triggerSession)
{ 
    AutoMutex lock(mLock);

    status_t status = NO_ERROR;
    // (abridged) flags is loaded from the shared control block
    int32_t flags = android_atomic_acquire_load(&mCblk->mFlags);
    if (!(flags & CBLK_INVALID)) { 
        status = mAudioRecord->start(event, triggerSession).transactionError();
        if (status == DEAD_OBJECT) { 
            flags |= CBLK_INVALID;
        }
    }
    if (flags & CBLK_INVALID) { 
        status = restoreRecord_l("start");
    }
    return status;
}
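The body above is a try-then-restore pattern: attempt the server-side start, and if the binder object has died (DEAD_OBJECT), mark the control block invalid so the track is recreated via restoreRecord_l before a final status is reported. A standalone sketch of just that control flow, with stand-in status values and callables replacing the real IAudioRecord and restoreRecord_l:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in status values and flag bit; the real ones live in utils/Errors.h
// and the audio client headers (the numeric values here are illustrative).
enum Status { NO_ERROR = 0, DEAD_OBJECT = -32 };
constexpr int32_t CBLK_INVALID = 0x01;

// Sketch of AudioRecord::start()'s control flow: try the server-side start
// first; on DEAD_OBJECT, mark the control block invalid and restore the track.
template <typename StartFn, typename RestoreFn>
Status startWithRestore(int32_t& flags, StartFn tryStart, RestoreFn restore) {
    Status status = NO_ERROR;
    if (!(flags & CBLK_INVALID)) {
        status = tryStart();
        if (status == DEAD_OBJECT) {
            flags |= CBLK_INVALID;   // remember: the server-side track is gone
        }
    }
    if (flags & CBLK_INVALID) {
        status = restore();          // recreate the track and start again
    }
    return status;
}
```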

Call chain: mAudioRecord->start() → RecordHandle::start() → RecordThread::RecordTrack::start()

RecordHandle and RecordThread::RecordTrack are implemented in
frameworks/av/services/audioflinger/Tracks.cpp

status_t AudioFlinger::RecordThread::RecordTrack::start(AudioSystem::sync_event_t event,
                                                        audio_session_t triggerSession)
{ 
    sp<ThreadBase> thread = mThread.promote();
    if (thread != 0) { 
        RecordThread *recordThread = (RecordThread *)thread.get();
        return recordThread->start(this, event, triggerSession);
    } else { 
        return BAD_VALUE;
    }
}

RecordThread is implemented in
frameworks/av/services/audioflinger/Threads.cpp

status_t AudioFlinger::RecordThread::start(RecordThread::RecordTrack* recordTrack,
                                           AudioSystem::sync_event_t event,
                                           audio_session_t triggerSession)
{ 
    //....
    { 
        status = AudioSystem::startInput(recordTrack->portId(), &silenced);
    }
    return status;
}

Next, AudioSystem::startInput:

status_t AudioSystem::startInput(audio_port_handle_t portId, bool *silenced)
{ 
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return PERMISSION_DENIED;
    return aps->startInput(portId, silenced);
}

AudioPolicyInterfaceImpl.cpp
frameworks/av/services/audiopolicy/service/
AudioPolicyService::startInput → mAudioPolicyManager->startInput

status_t AudioPolicyManager::startInput(audio_io_handle_t input,
                                        audio_session_t session,
                                        bool silenced,
                                        concurrency_type__mask_t *concurrency)
{ 
	// Get the input descriptor for this input handle
    sp<AudioInputDescriptor> inputDesc = mInputs.valueAt(index);

    // Make sure we start in the correct silenced state
    audioSession->setSilenced(silenced);
    // Increment the active count before the getNewInputDevice() call below,
    // since only active sessions are considered for device selection
    audioSession->changeActiveCount(1);

    // Routing?
    mInputRoutes.incRouteActivity(session);
    if (audioSession->activeCount() == 1 || mInputRoutes.getAndClearRouteChanged(session)) { 
        // If capturing from the mic of the primary HW module, indicate an active capture to the sound trigger service
        audio_devices_t device = getNewInputDevice(inputDesc);
        setInputDevice(input, device, true );
		// increment the descriptor's activity count
        status_t status = inputDesc->start();
        if (status != NO_ERROR) { 
            mInputRoutes.decRouteActivity(session);
            audioSession->changeActiveCount(-1);
            return status;
        }
        //....
    }

    return NO_ERROR;
}
  • setInputDevice: routes to the selected input device
  • RecordThread: the record thread is started
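Note the rollback in startInput(): the active count is incremented before device selection (only active sessions count), and decremented again if inputDesc->start() fails, so a failed start leaves the bookkeeping unchanged. A toy sketch of that pattern, with hypothetical types standing in for the real AudioPolicyManager structures:

```cpp
#include <cassert>

// Hypothetical stand-in for the per-session state kept by the policy manager.
struct Session { int activeCount = 0; };

// Bump the count first, because device selection only considers active
// sessions; roll it back if the descriptor fails to start.
bool startInputSketch(Session& session, bool descriptorStartOk) {
    session.activeCount += 1;        // count before getNewInputDevice()
    if (!descriptorStartOk) {
        session.activeCount -= 1;    // undo: failed start leaves state unchanged
        return false;
    }
    return true;
}
```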

By the time startRecording() returns, the recording route has been established and the record thread is running, reading audio data from the driver into the AudioBuffer ring buffer. In other words, the recording device node has already been opened and data is being read from it.
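The ring buffer mentioned above can be pictured as a single-producer/single-consumer queue: the capture thread writes frames at the tail while the client's read() drains them from the head. A toy illustration (not the real shared-memory control-block implementation used by AudioRecord):

```cpp
#include <cassert>
#include <cstdint>
#include <cstddef>
#include <vector>

// Toy single-producer/single-consumer ring buffer of 16-bit PCM samples.
class RingBuffer {
public:
    explicit RingBuffer(size_t capacity) : mBuf(capacity) {}

    // Called from the capture side; returns the number of samples written.
    size_t write(const int16_t* data, size_t n) {
        size_t written = 0;
        while (written < n && mCount < mBuf.size()) {
            mBuf[(mHead + mCount) % mBuf.size()] = data[written++];
            ++mCount;
        }
        return written;
    }

    // Called from the client side (the AudioRecord.read analogue);
    // returns the number of samples actually read.
    size_t read(int16_t* out, size_t n) {
        size_t r = 0;
        while (r < n && mCount > 0) {
            out[r++] = mBuf[mHead];
            mHead = (mHead + 1) % mBuf.size();
            --mCount;
        }
        return r;
    }

private:
    std::vector<int16_t> mBuf;
    size_t mHead = 0;   // index of the oldest sample
    size_t mCount = 0;  // samples currently buffered
};
```

The real implementation is lock-free across processes, but the head/tail arithmetic is the same idea.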

AudioRecord.read and stop will be covered in a follow-up post.

Reference: https://www.cnblogs.com/pngcui/p/10016563.html

