
A Smart Voice Butler Built with GPT + LangChain Agent


I built a smart butler with GPT + LangChain Agent. Combined with Microsoft's speech services, it supports voice wake-up, voice conversation, and voice control of smart-home devices. This post walks through how it works in detail; the code is open-sourced on GitHub for learning and discussion.

GitHub repository: mawwalker/moss. If you find it helpful, a star is appreciated.

Introduction

The rise of LLMs (Large Language Models) has made natural language tasks much easier to handle, and the open-source community has since produced many excellent LLM-based applications. However, LLMs still have notable weaknesses in logical reasoning and hallucination, and since an LLM is fundamentally a probabilistic generative model, these problems are unlikely to be fully solved for quite some time. Hence GPT plugins, external knowledge bases, and later Agents: all of them use external tools to compensate for the LLM's shortcomings, or, put another way, they use the LLM's strong abilities in semantic understanding and generation to strengthen existing tools.

The core idea of this post is to build an Agent on top of LangChain with a few custom tool functions for the LLM to call. On the speech side, I use Microsoft's keyword wake-up and speech recognition (ASR), with Edge-TTS as the TTS service.

Note: all code in this post is for learning and reference only. For the smart-home control part in particular, please make absolutely sure the devices involved have no safety issues; neither this post nor the code takes any responsibility for the consequences.

This post focuses on the implementation process and the core code. If you just want to use the project, go straight to the GitHub repository and follow the README.

Preparation

Accounts / APIs

  • A Microsoft Azure account
  • Generate an Azure Keyword Recognition model: https://learn.microsoft.com/en-us/azure/ai-services/speech-service/custom-keyword-basics?pivots=programming-language-python
  • Enable the Azure Speech Service: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/
  • Enable the Azure OpenAI Service, or any other LLM supported by LangChain: https://azure.microsoft.com/en-us/products/cognitive-services/openai-service
  • OpenWeather API: https://openweathermap.org/api
  • Google Custom Search JSON API: https://developers.google.com/custom-search/v1/overview
  • Home Assistant: https://www.home-assistant.io/

Python Requirements

python >= 3.10

azure-cognitiveservices-speech==1.36.0
edge-tts==6.1.10
langchain==0.1.14
langchain_community==0.0.31
langchain_core==0.1.40
langchainhub==0.1.15
langchain_openai==0.1.1
loguru==0.7.2
google-api-python-client==2.125.0
pyaudio==0.2.14
pydub==0.25.1
pyowm==3.3.0
pytz==2024.1
PyYAML==6.0.1
Requests==2.31.0

1. Building the Tools

Tools are the key to unlocking the LLM's capabilities in LangChain. A good Tool should have the simplest possible input signature, leaving only the easy part to the LLM; each Tool should do exactly one specific job and come with a clear function description that tells the LLM when to call it and what the input format is, plus the return format if needed.

LangChain ships with many ready-made Tools that can be loaded directly via load_tools; for the full list, see the documentation: Tools | Langchain

The rest of this section focuses on how I build custom tools.

In LangChain, there are several ways to implement a Tool:

  • You can wrap an existing function with the StructuredTool class and turn it into a tool for the Agent;
  • Or you can subclass BaseTool to define a Tool class and implement only the key parts (see the sketch after this list).
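For completeness, here is a minimal sketch of the BaseTool approach. The light-switch tool below is hypothetical and not part of the repository; it only illustrates the pattern:

from typing import Type
from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import BaseTool

class LightSwitchInput(BaseModel):
    entity_id: str = Field(description="Home Assistant entity id of the light")
    turn_on: bool = Field(description="True to turn the light on, False to turn it off")

class LightSwitchTool(BaseTool):
    # name and description tell the LLM when and how to call the tool
    name = "light_switch"
    description = "Turn a light on or off. input: entity_id: str, turn_on: bool. output: bool."
    args_schema: Type[BaseModel] = LightSwitchInput

    def _run(self, entity_id: str, turn_on: bool) -> bool:
        # A real implementation would call Home Assistant's light.turn_on / light.turn_off service here
        return True

light_switch_tool = LightSwitchTool()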

In my implementation, since I already had some script functions for controlling smart-home devices, I mainly use the StructuredTool class to wrap plain functions; adjust this to your own needs. LangChain's documentation on custom Tools covers the other options in detail.

Here I will use my own custom function for controlling the air conditioner as the example. First, import the required packages and configuration:

import json
import requests
from loguru import logger
from langchain.pydantic_v1 import BaseModel
from langchain.agents import tool, Tool, initialize_agent, load_tools
from langchain.tools import BaseTool, StructuredTool, tool
from langchain_community.utilities import OpenWeatherMapAPIWrapper
from config.conf import config

hass_url = config['agent_config']['hass']['host']
hass_port = config['agent_config']['hass']['port']
hass_headers = {'Authorization': config['agent_config']['hass']['key'], 'content-type': 'application/json'}
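For reference, the keys read above assume a config.yml shaped roughly like this (a hedged sketch; the real file in the repository contains more sections, and the values here are placeholders):

agent_config:
  hass:
    host: "http://192.168.1.10"              # Home Assistant base URL, including the scheme
    port: "8123"                             # kept as a string because it is concatenated into the URL
    key: "Bearer <long-lived-access-token>"  # sent as the Authorization header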

1.1. The Tool Input Schema

First, define a class that describes the input format of the Tool function:

class HvacControlInput(BaseModel):
    entity_id: str
    input_dict: dict

1.2. The Tool Function

The function controls the air conditioner by calling Home Assistant's API:

def hvac_control(entity_id: str, input_dict: dict):
    # Build the service-call payload for Home Assistant
    data = {"entity_id": entity_id}
    operation = input_dict['operation']
    if input_dict.get("hvac_mode"):
        data["hvac_mode"] = input_dict.get("hvac_mode")
    if input_dict.get("temperature"):
        data["temperature"] = input_dict.get("temperature")
    if input_dict.get("fan_mode"):
        data["fan_mode"] = input_dict.get("fan_mode")
    payload = json.dumps(data)
    # The service endpoint is /api/services/<domain>/<service>, e.g. /api/services/climate/set_temperature
    domain = entity_id.split(".")[0]
    url_s = hass_url + ":" + hass_port + "/api/services/" + domain + "/" + operation
    logger.info(f"url_s: {url_s}, data: {payload}")
    response = requests.post(url_s, headers=hass_headers, data=payload)
    if response.status_code in (200, 201):
        return True
    logger.error(response)
    return False

In Home Assistant's Developer Tools you can test the service calls for each kind of device, and the official Home Assistant documentation describes the corresponding API endpoints. The air conditioner controlled here is an IR remote I configured through my Xiao AI (Mi) speaker.

The most direct way to find the API behind a specific control: log into your Home Assistant in a browser, open the developer tools, switch to the console tab, click the button or switch you want to automate, and look at the output; combined with Home Assistant's API documentation, you can work out the endpoint and parameters of that control command.
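You can also probe an entity directly through Home Assistant's REST API to confirm its entity_id and current attributes. A small sketch, reusing the hass_url, hass_port and hass_headers defined in section 1 (the entity_id below is hypothetical):

import requests

# GET /api/states/<entity_id> returns the entity's current state and attributes as JSON
url = f"{hass_url}:{hass_port}/api/states/climate.living_room_ac"
resp = requests.get(url, headers=hass_headers)
print(resp.status_code, resp.json())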

1.3. Building the Tool

Use StructuredTool to build the Tool:

hvac_control_tool = StructuredTool(
    name="hvac_control",
    description="""Control the hvac. input: entity_id: str, input_dict: dict, output: bool. input_dict include: operation must in (set_hvac_mode, set_fan_mode, set_temperature), 
    hvac_mode must in (off, auto, cool, heat, dry, fan_only), temperature(int type), fan_mode must in ('Fan Speed Down', 'Fan Speed Up'), You must choose at least one operation and Pass the corresponding parameter (ONLY ONE) as needed.
    """,
    func=hvac_control,
    args_schema=HvacControlInput
)

name is whatever name you give this function;

In description, explain when the tool should be used and what the input format is. Unless you are using an LLM specifically tuned for Chinese, write the prompt in English, which has the best compatibility.
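To sanity-check the tool outside the Agent, it can be invoked directly, since LangChain tools are Runnables. A minimal sketch with a hypothetical entity_id:

result = hvac_control_tool.invoke({
    "entity_id": "climate.living_room_ac",                             # hypothetical entity id
    "input_dict": {"operation": "set_temperature", "temperature": 26}  # set the AC to 26 degrees
})
print(result)  # True if Home Assistant answered with 200/201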

2. Creating an Agent

The first decision is the Agent type. LangChain predefines several Agent types, such as the Structured Chat Agent and the ReAct Agent. This post uses the Structured Chat Agent. I have not studied the differences between them in depth yet, but according to the official docs this type is more broadly compatible and supports multi-input tools, and since it is built on chat models it should also work well with a wider range of models.

For details see: https://python.langchain.com/docs/modules/agents/agent_types/

2.1. Import the Required Packages


import os
from loguru import logger
os.environ["OPENAI_API_VERSION"] = "2023-05-15"
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent, create_structured_chat_agent
from langchain_openai import AzureChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.agents import tool, Tool, initialize_agent, load_tools
from config.conf import config, ADDITION_SYSTEM_MESSAGE
from .tools import brightness_control_tool, feeder_out_tool, get_attr_tool, hvac_control_tool, openweathermap_api

Depending on your setup, import the package for whichever LLM you have access to and configure the corresponding API key.
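For example, to use the regular OpenAI API instead of Azure, a sketch might look like this (the model name is illustrative; any chat model supported by LangChain works):

import os
from langchain_openai import ChatOpenAI

os.environ["OPENAI_API_KEY"] = "sk-..."   # your OpenAI API key
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)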

2.2. Create the Agent

Create a class that implements the Agent logic:

class Agents(object):
    
    def __init__(self):
        self.langchain_init()
        
    def langchain_init(self):
        self.llm = None
        llm_provider = config['llm']['provider']
        if llm_provider == 'azure':
            api_key = config['llm']['azure']['api_key']
            endpoint = config['llm']['azure']['endpoint']
            deployment = config['llm']['azure']['deployment']
            self.llm = AzureChatOpenAI(azure_deployment=deployment, api_key=api_key, azure_endpoint=endpoint)
        # Pull the reference structured-chat prompt from LangChain Hub; its system message is extended below
        structured_chat_prompt = hub.pull("hwchase17/structured-chat-agent")
        structured_chat_system = structured_chat_prompt.messages[0].prompt.template
        structured_chat_human = structured_chat_prompt.messages[2].prompt.template
        prompt = ChatPromptTemplate.from_messages([
            ('system', structured_chat_system+ ADDITION_SYSTEM_MESSAGE),
            structured_chat_human
            ]
        )
        
        # Built-in LangChain tools (Google search) plus the custom smart-home tools defined above
        internal_tools = load_tools(["google-search"], self.llm)
        
        tools = [brightness_control_tool, feeder_out_tool, get_attr_tool, hvac_control_tool, openweathermap_api
                 ] + internal_tools
        agent = create_structured_chat_agent(self.llm, tools, prompt)
        self.agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, max_iterations=3, handle_parsing_errors=True)
        self.device_dict = config['agent_config']['hass']['entity_ids']

    def handle(self, text):
        handle_result = self.agent_executor.invoke({"input": f"{text}", "device_list": self.device_dict,
                                                    "location": config['location'], 
                                                    "language": f"{config['agent_config']['language']}"})
        output_text = handle_result["output"]
        return output_text

Here llm is the base model; I use Azure's OpenAI service, but you can switch to any other LLM that suits you.

prompt is the conversation template. I start from a predefined template and then modify it: through ADDITION_SYSTEM_MESSAGE I append some extra prompt text, mainly a list of my smart-home devices plus instructions, to make sure the model uses them correctly.

Calling create_structured_chat_agent with these arguments creates the Agent; tools is the list of tool functions the LLM is allowed to call.

Finally, in handle, the agent is invoked via agent_executor.invoke with the corresponding inputs, and the return value is the Agent's output.

You can test the Agent like this:

agent = Agents()
text = "杭州天气怎么样"  # "How's the weather in Hangzhou?"
response = agent.handle(text)

If nothing goes wrong, the logs will show the tool-calling process and the final result.

If this step works, the Agent is up and running. Further ideas or requirements can be built on top of it: in most cases, adding a new Tool in the same way is enough, and if necessary you can extend ADDITION_SYSTEM_MESSAGE with some dedicated prompt text; see the sketch below.
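As a hedged illustration of that extension path, the curtain tool below is hypothetical; the pattern is the same as hvac_control_tool: define the input schema and the function, wrap them in a StructuredTool, and add it to the tools list in Agents.langchain_init:

from langchain.pydantic_v1 import BaseModel
from langchain.tools import StructuredTool

class CurtainControlInput(BaseModel):
    entity_id: str
    operation: str  # open_cover or close_cover

def curtain_control(entity_id: str, operation: str) -> bool:
    # A real implementation would POST to /api/services/cover/<operation>, like hvac_control above
    return True

curtain_control_tool = StructuredTool(
    name="curtain_control",
    description="Open or close a curtain. input: entity_id: str, operation must in (open_cover, close_cover). output: bool.",
    func=curtain_control,
    args_schema=CurtainControlInput
)

# then, in Agents.langchain_init:
# tools = [..., curtain_control_tool] + internal_tools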

With the pieces above in place, all that is left is to listen to the microphone, run speech recognition, feed the recognized text to the Agent, and send the Agent's output to the TTS service; that is already a simple voice butler.

3. Speech Features

The speech side has four parts: keyword wake-up, speech recognition (ASR), speech synthesis (TTS), and audio playback.

I use Microsoft's services for the first two, and Edge-TTS for synthesis.

Azure's custom keyword model is free. Speech recognition has a monthly free quota and is billed beyond that (see the Azure documentation for details). Edge-TTS is an open-source Python package and costs nothing.

3.1. Microsoft Keyword Wake-up

Microsoft's keyword recognition model is used to wake the voice butler; see the documentation for how to use it: Custom keyword recognition basics - Azure Cognitive Services | Microsoft Docs. Following the official guide, create a model, wait a while, then download and unzip the generated model to get a .table file; that model file is what is used below.

My code is as follows:

    def listen_keyword(self):
        """Listen for a specific keyword to activate speech recognition"""
        logger.info("[AZURE SPEECH RECOGNITION]: Listening for wakeup keyword")
        try:
            result = self.keyword_recognizer.recognize_once_async(model=self.keyword_model).get()
            if not result:
                return False
            if result.reason == speechsdk.ResultReason.RecognizedKeyword:
                logger.info("[AZURE SPEECH RECOGNITION]: Wakeup word detected")
                return True
            elif result.reason == speechsdk.ResultReason.NoMatch:
                nomatch_detail = result.no_match_details
                logger.info(f"[AZURE SPEECH RECOGNITION]: No match found: {nomatch_detail}")
            elif result.reason == speechsdk.ResultReason.Canceled:
                cancellation_details = result.cancellation_details
                logger.info(f"[AZURE SPEECH RECOGNITION]: Cancellation reason: {cancellation_details.reason}")
                if cancellation_details.reason == speechsdk.CancellationReason.Error:
                    logger.info(f"[AZURE SPEECH RECOGNITION]: Error details: {cancellation_details.error_details}")
        except Exception as e:
            logger.error(f"An error occurred: {e}", exc_info=True)
        return False
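The self.keyword_recognizer and self.keyword_model attributes used above come from the Azure Speech SDK setup in the entry class shown later; in short:

self.keyword_recognizer = speechsdk.KeywordRecognizer()
self.keyword_model = speechsdk.KeywordRecognitionModel(self.model_path)  # path to the downloaded .table file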

3.2. Microsoft Speech Recognition

Microsoft's speech recognition service converts speech to text; see the documentation: Get started with speech-to-text - Azure Cognitive Services | Microsoft Docs

The code is as follows:

    def listen_speech(self):
        """Listen for speech input"""
        logger.info("[AZURE SPEECH RECOGNITION]: Listening for input")
        try:
            result = self.speech_recognizer.recognize_once_async().get()
            if result.reason == speechsdk.ResultReason.RecognizedSpeech:
                self.play(self.pass_file)
                logger.info(f"[AZURE SPEECH RECOGNITION]: Recognized speech: {result.text}")
                return result.text
            elif result.reason == speechsdk.ResultReason.NoMatch:
                nomatch_detail = result.no_match_details
                logger.info(f"[AZURE SPEECH RECOGNITION]: No match found: {nomatch_detail}")
            elif result.reason == speechsdk.ResultReason.Canceled:
                cancellation_details = result.cancellation_details
                logger.info(f"[AZURE SPEECH RECOGNITION]: Cancellation reason: {cancellation_details.reason}")
                if cancellation_details.reason == speechsdk.CancellationReason.Error:
                    logger.info(f"[AZURE SPEECH RECOGNITION]: Error details: {cancellation_details.error_details}")
        except Exception as e:
            logger.error(f"An error occurred: {e}", exc_info=True)
        return ""

3.3. Edge-TTS

Edge-TTS is an open-source Python package that converts text to speech; see the GitHub repository: rany2/edge-tts

The code is as follows:

import os
import uuid
import asyncio

import edge_tts
from loguru import logger

from config.conf import config


class EdgeTTS():
    """
    edge-tts engine
    voice: the voice to use, default zh-CN-XiaoxiaoNeural
        Run `edge-tts --list-voices` on the command line to print all available voices.
    """

    def __init__(self, **args):
        self.voice = config['tts']['edge-tts']['voice_name']

    async def async_get_speech(self, phrase):
        try:
            os.makedirs(config['tmp_path'], exist_ok=True)
            tmpfile = os.path.join(config["tmp_path"], uuid.uuid4().hex + ".mp3")
            tts = edge_tts.Communicate(text=phrase, voice=self.voice)
            await tts.save(tmpfile)    
            logger.info(f"EdgeTTS Speech Synthesis Success! Path: {tmpfile}")
            return tmpfile
        except Exception as e:
            logger.error(f"EdgeTTS Speech Synthesis Failed: {str(e)}", exc_info=True)
            return None

    def get_speech(self, phrase):
        event_loop = asyncio.new_event_loop()
        tmpfile = event_loop.run_until_complete(self.async_get_speech(phrase))
        event_loop.close()
        return tmpfile
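Usage is straightforward; get_speech returns the path of the generated mp3 (inside the configured tmp_path), or None on failure:

tts = EdgeTTS()
audio_path = tts.get_speech("Hello, I am your voice butler.")
print(audio_path)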

3.4. Audio Playback

This part is implemented with pyaudio. Playback runs in a separate thread, so it is asynchronous and can be interrupted.

import threading
from time import sleep

import pyaudio
from loguru import logger
from pydub import AudioSegment
from pydub.utils import make_chunks

from speech.tools import check_and_delete

class AudioPlayer:
    def __init__(self) -> None:
        self._bplaying = True
        self._playing = False
        # Event gate that makes sure only one playback thread opens the audio device at a time
        self._audioGate = threading.Event()
        self._audioGate.set()

    def playSound(self, audioFilePath, delete=False):
        """Play an audio file asynchronously in a background thread."""
        try:
            audioThread = threading.Thread(target=self._playSound, args=(audioFilePath, delete))
            audioThread.start()
        except Exception as e:
            logger.error(f"Failed to play audio: {str(e)}")
            return False

    def _playSound(self, audioFilePath, delete=False, volume=100.0):
        if not audioFilePath:
            return True
        # Wait for any previous playback to release the gate, then claim it
        self._audioGate.wait()
        self._audioGate.clear()
        try:
            audio = pyaudio.PyAudio()
            sound = AudioSegment.from_file(audioFilePath)
            self._audioGate.set()
            stream = audio.open(format=audio.get_format_from_width(sound.sample_width),
                channels=sound.channels,
                rate=sound.frame_rate,
                output=True)

            self._playing = True
            start = 0
            play_time = start
            length = sound.duration_seconds
            # Apply the volume as a dB attenuation and play in 50 ms chunks so playback can be interrupted
            playchunk = sound[start*1000.0:(start+length)*1000.0] - (60 - (60 * (volume/100.0)))
            millisecondchunk = 50 / 1000.0
            self._bplaying = True
            for chunks in make_chunks(playchunk, millisecondchunk*1000):
                if not self._bplaying:
                    break
                play_time += millisecondchunk
                stream.write(chunks._data)
                if play_time >= start + length:
                    break
            stream.close()
            audio.terminate()
            self._playing = False
            if delete:
                check_and_delete(audioFilePath)
            logger.info(f"Playback finished: {audioFilePath}")
            return True
        except Exception as e:
            # Re-open the gate so a failure here does not block later playback
            self._audioGate.set()
            self._playing = False
            logger.error(f"Failed to play audio: {str(e)}")
            return False
    def stop(self):
        self._bplaying = False
        
    def is_playing(self):
        return self._playing

Pay attention to the _audioGate variable: it must be handled this way, otherwise bugs will inevitably appear after a few repeated uses. It guarantees that only one audio thread runs at a time and prevents conflicts.

Initially I used the audio playback code from wzpan's wukong-robot project, but during debugging it conflicted with Microsoft's keyword wake-up listener (the cause is still unclear), so I wrote my own playback class.
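Usage looks like this: playSound returns immediately because playback runs in a background thread, and stop interrupts the current playback:

player = AudioPlayer()
player.playSound("assets/media/click.mp3")  # non-blocking
if player.is_playing():
    player.stop()                           # interrupt playback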

Entry Class

Put the Agent and speech modules implemented above together in one class and keep listening to the microphone in a loop, and the assistant is complete.

from loguru import logger
import signal
import time

import azure.cognitiveservices.speech as speechsdk

from speech.player import AudioPlayer
from speech.tts import EdgeTTS
from agents import Agents
from config.conf import config

Sentry = True

def SignalHandler_SIGINT(SignalNumber, Frame):
    """Set the loop sentinel to False on Ctrl-C so the main loop can exit cleanly."""
    global Sentry
    Sentry = False

signal.signal(signal.SIGINT, SignalHandler_SIGINT)

class Moss:
    def __init__(self) -> None:
        self.subscription_key = config['asr']['azure']['speech_key']
        self.region = config['asr']['azure']['speech_region']
        self.model_path = config['keyword']['azure']['model']
        self.speech_config = speechsdk.SpeechConfig(subscription=self.subscription_key, region=self.region)
        self.speech_config.speech_recognition_language = config['asr']['azure']['language']
        self.keyword_recognizer = speechsdk.KeywordRecognizer()
        self.keyword_model = speechsdk.KeywordRecognitionModel(self.model_path)
        self.speech_recognizer = speechsdk.SpeechRecognizer(speech_config=self.speech_config)
        self.player = AudioPlayer()
        self.agent = Agents()
        self.tts = EdgeTTS()
        # Prompt sounds: wake-up acknowledgement, error, and "speech recognized" cues
        self.trigger_file = "assets/media/click.mp3"
        self.error_file = "assets/media/error.mp3"
        self.pass_file = "assets/media/pass.mp3"

    def handle(self, text):
        """Handle recognized speech and respond using text-to-speech"""
        logger.info(f"[AZURE SPEECH RECOGNITION]: Recognized speech: {text}")
        try:
            response = self.agent.handle(text)
            logger.info(f"[Agent Response]: Received response: {response}")
            logger.info("[TTS]: Starting Text To Speech")
            tmpfile = self.tts.get_speech(response)
            self.play(tmpfile, delete=True)
        except Exception as e:
            logger.error(f"An error occurred: {e}", exc_info=True)
            logger.info(f"[TTS]: Error occurred while processing text: {text}")
            self.play(self.error_file)
            
    
    def listen_speech(self):
        """Listen for speech input"""
        logger.info("[AZURE SPEECH RECOGNITION]: Listening for input")
        try:
            result = self.speech_recognizer.recognize_once_async().get()
            if result.reason == speechsdk.ResultReason.RecognizedSpeech:
                self.play(self.pass_file)
                logger.info(f"[AZURE SPEECH RECOGNITION]: Recognized speech: {result.text}")
                return result.text
            elif result.reason == speechsdk.ResultReason.NoMatch:
                nomatch_detail = result.no_match_details
                logger.info(f"[AZURE SPEECH RECOGNITION]: No match found: {nomatch_detail}")
            elif result.reason == speechsdk.ResultReason.Canceled:
                cancellation_details = result.cancellation_details
                logger.info(f"[AZURE SPEECH RECOGNITION]: Cancellation reason: {cancellation_details.reason}")
                if cancellation_details.reason == speechsdk.CancellationReason.Error:
                    logger.info(f"[AZURE SPEECH RECOGNITION]: Error details: {cancellation_details.error_details}")
        except Exception as e:
            logger.error(f"An error occurred: {e}", exc_info=True)
        return ""
    
    def listen_keyword(self):
        """Listen for a specific keyword to activate speech recognition"""
        logger.info("[AZURE SPEECH RECOGNITION]: Listening for wakeup keyword")
        try:
            result = self.keyword_recognizer.recognize_once_async(model=self.keyword_model).get()
            if not result:
                return False
            if result.reason == speechsdk.ResultReason.RecognizedKeyword:
                logger.info("[AZURE SPEECH RECOGNITION]: Wakeup word detected")
                return True
            elif result.reason == speechsdk.ResultReason.NoMatch:
                nomatch_detail = result.no_match_details
                logger.info(f"[AZURE SPEECH RECOGNITION]: No match found: {nomatch_detail}")
            elif result.reason == speechsdk.ResultReason.Canceled:
                cancellation_details = result.cancellation_details
                logger.info(f"[AZURE SPEECH RECOGNITION]: Cancellation reason: {cancellation_details.reason}")
                if cancellation_details.reason == speechsdk.CancellationReason.Error:
                    logger.info(f"[AZURE SPEECH RECOGNITION]: Error details: {cancellation_details.error_details}")
        except Exception as e:
            logger.error(f"An error occurred: {e}", exc_info=True)
        return False
    
    def interrupt(self):
        if self.player and self.player.is_playing():
            self.player.stop()
            
    def play(self, src, delete=False):
        """Play an audio file, interrupting any audio that is currently playing."""
        if self.player:
            self.interrupt()
        self.player.playSound(src, delete)

    def loop(self):
        global Sentry
        while Sentry:
            try:
                keyword_result = self.listen_keyword()
                if keyword_result:
                    self.play(self.trigger_file)
                    speech_result = self.listen_speech()
                    if speech_result:
                        self.handle(speech_result)
            except Exception as e:
                logger.error(f"An error occurred: {e}", exc_info=True)

if __name__ == "__main__":
    moss = Moss()
    moss.loop()

Docker Deployment

Finally, package the project as a Docker image and deploy it, for example on a Raspberry Pi, and you have a working smart voice butler.

The Dockerfile is as follows:

FROM python:3.11-bullseye

RUN apt-get update -y && apt-get install -y portaudio19-dev python3-pyaudio sox pulseaudio libsox-fmt-all ffmpeg wget libpcre3 libpcre3-dev libatlas-base-dev python3-dev build-essential libssl-dev ca-certificates libasound2 && \
    echo "==> Clean up..."  && \
    apt-get clean  && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /moss
COPY requirements.txt /moss
RUN pip install --no-cache-dir -U pip && pip install --no-cache-dir -r /moss/requirements.txt
COPY . /moss
CMD ["python", "app.py"]
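Build the image from the project root first; the tag only needs to match the one used in the run command below:

docker build -t moss:latest .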

On a Linux host with audio devices, the container can be started with this command:

docker run -itd \
--device /dev/snd \
--name moss \
-e PULSE_SERVER=unix:${XDG_RUNTIME_DIR}/pulse/native \
-v ${XDG_RUNTIME_DIR}/pulse/native:${XDG_RUNTIME_DIR}/pulse/native \
-v ~/.config/pulse/cookie:/root/.config/pulse/cookie \
-v ./config/config.yml:/moss/config/config.yml \
-v /etc/localtime:/etc/localtime:ro \
--restart unless-stopped \
moss:latest

Summary

That's it. Say the custom wake-up word into the microphone, then ask your question, and the voice butler will answer. If you run Home Assistant, you can control your smart home by voice, and you can add your own functions to extend it further.

This post showed how to use a LangChain Agent together with Microsoft's speech services to build a simple voice butler. If you need more, it is easy to extend: add more tool functions, plug in other LLM models, swap in other speech services, and so on.

I hope this write-up serves as a useful starting point. If you have questions, leave a comment and I will do my best to answer.

If this repository helps you, a star is appreciated.

If you run into problems while using it, feel free to open an issue or submit a PR directly.
