MediaTek DaVinci Assistant API Introduction
MediaTek DaVinci provides an Assistant API that lets the Assistants you build on the DaVinci platform be integrated into all kinds of environments, so your DaVinci Assistants can serve you across different environments and devices.
How to share a finished Assistant
Once you have built an Assistant on DaVinci, there are two ways to share it with other users:
Share the Assistant with the other user directly
Provide the other user with the Assistant's Assistant ID and an API Key issued from your own account (a minimal usage sketch follows this list)
To revoke the Assistant's service later, simply delete that API Key from your own API Key panel.
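For the second option, the recipient only needs those two values to call your Assistant from their own code. The following is a minimal sketch based on the Python examples later in this document; the placeholder strings are assumptions to be replaced with the values you share, and the plugin-calling loop shown in the later examples is omitted for brevity.

from openai import OpenAI

# Values shared by the Assistant owner (placeholders for this sketch)
ASSISTANT_API = "https://prod.dvcbot.net/api/assts/v1"
API_KEY = "<API Key issued by the Assistant owner>"
ASSISTANT_ID = "<Assistant ID of the shared Assistant>"

# The Assistant API is OpenAI Assistants-compatible, so the standard client can point at it
client = OpenAI(base_url=ASSISTANT_API, api_key=API_KEY)

thread = client.beta.threads.create()
client.beta.threads.messages.create(thread_id=thread.id, role="user", content="Hello!")
run = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id=ASSISTANT_ID)
msgs = client.beta.threads.messages.list(thread_id=thread.id, order="desc")
print(msgs.data[0].content[0].text.value)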
Run a preview test with the following Gradio playground link:
https://www.gradio.app/playground?demo=Hello_World&code=IyBQbGVhc2UgdXBkYXRlIHlvdXIgYXNzaXN0YW50IGlkIGFuZCBhcGkga2V5IGhlcmU6CkFQSV9LRVkgPSAiIgpBU1NJU1RBTlRfSUQgPSAiIgpBU1NJU1RBTlRfQVBJID0gImh0dHBzOi8vcHJvZC5kdmNib3QubmV0L2FwaS9hc3N0cy92MSIKCmltcG9ydCBtaWNyb3BpcDsgYXdhaXQgbWljcm9waXAuaW5zdGFsbCgnb3BlbmFpPT0xLjM5LjAnKTsgZnJvbSBweW9kaWRlLmh0dHAgaW1wb3J0IHB5ZmV0Y2g7IGltcG9ydCBodHRweDsgaW1wb3J0IGdyYWRpbyBhcyBncgpmcm9tIG9wZW5haSBpbXBvcnQgQXN5bmNPcGVuQUkKZnJvbSBkYXRldGltZSBpbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGpzb24KCmNsYXNzIFRyYW5zcG9ydChodHRweC5Bc3luY0Jhc2VUcmFuc3BvcnQpOgogICAgYXN5bmMgZGVmIGhhbmRsZV9hc3luY19yZXF1ZXN0KHNlbGYsIHJlcXVlc3Q6IGh0dHB4LlJlcXVlc3QpOgogICAgICAgIHJlc3AgPSBhd2FpdCBweWZldGNoKHN0cihyZXF1ZXN0LnVybCksIG1ldGhvZD1yZXF1ZXN0Lm1ldGhvZCwgaGVhZGVycz1kaWN0KHJlcXVlc3QuaGVhZGVycy5pdGVtcygpKSwgYm9keT1qc29uLmR1bXBzKGpzb24ubG9hZHMocmVxdWVzdC5jb250ZW50KSwgZW5zdXJlX2FzY2lpPUZhbHNlKS5lbmNvZGUoKSBpZiByZXF1ZXN0Lm1ldGhvZCAhPSAnR0VUJyBhbmQgcmVxdWVzdC5tZXRob2QgIT0gJ0RFTEVURScgZWxzZSBOb25lKQogICAgICAgIHJldHVybiBodHRweC5SZXNwb25zZShyZXNwLnN0YXR1cywgaGVhZGVycz1yZXNwLmhlYWRlcnMsIHN0cmVhbT1odHRweC5CeXRlU3RyZWFtKGF3YWl0IHJlc3AuYnl0ZXMoKSkpCgpjbGllbnQgPSBBc3luY09wZW5BSShiYXNlX3VybD1BU1NJU1RBTlRfQVBJLCBhcGlfa2V5PUFQSV9LRVksIGh0dHBfY2xpZW50PWh0dHB4LkFzeW5jQ2xpZW50KHRyYW5zcG9ydD1UcmFuc3BvcnQoKSkpCmlmIF9fbmFtZV9fID09ICJfX21haW5fXyI6CiAgICBhc3luYyBkZWYgc2VuZF9tZXNzYWdlKG1lc3NhZ2UsIGhpc3RvcnkpOgogICAgICAgIHRocmVhZCA9IGF3YWl0IGNsaWVudC5iZXRhLnRocmVhZHMuY3JlYXRlKG1lc3NhZ2VzPVt7InJvbGUiOiAidXNlciIgaWYgaSA9PSAwIGVsc2UgImFzc2lzdGFudCIsICJjb250ZW50IjogY30gZm9yIHAgaW4gaGlzdG9yeSBmb3IgaSwgYyBpbiBlbnVtZXJhdGUocCldKQogICAgICAgIGF3YWl0IGNsaWVudC5iZXRhLnRocmVhZHMubWVzc2FnZXMuY3JlYXRlKHRocmVhZF9pZD10aHJlYWQuaWQsIHJvbGU9J3VzZXInLCBjb250ZW50PW1lc3NhZ2UpCiAgICAgICAgcnVuID0gYXdhaXQgY2xpZW50LmJldGEudGhyZWFkcy5ydW5zLmNyZWF0ZV9hbmRfcG9sbCh0aHJlYWRfaWQ9dGhyZWFkLmlkLCBhc3Npc3RhbnRfaWQ9QVNTSVNUQU5UX0lELCBhZGRpdGlvbmFsX2luc3RydWN0aW9ucz1mIlxuVGhlIGN1cnJlbnQgdGltZSBpczoge2RhdGV0aW1lLm5vdygpfSIsIHRpbWVvdXQ9Mi4wKQogICAgICAgIHdoaWxlIHJ1bi5zdGF0dXMgPT0gJ3JlcXVpcmVzX2FjdGlvbicgYW5kIHJ1bi5yZXF1aXJlZF9hY3Rpb246CiAgICAgICAgICAgIG91dHB1dHMgPSBbXQogICAgICAgICAgICBmb3IgY2FsbCBpbiBydW4ucmVxdWlyZWRfYWN0aW9uLnN1Ym1pdF90b29sX291dHB1dHMudG9vbF9jYWxsczoKICAgICAgICAgICAgICAgIHJlc3AgPSBhd2FpdCBjbGllbnQuX2NsaWVudC5wb3N0KEFTU0lTVEFOVF9BUEkrJy9wbHVnaW5hcGknLCBwYXJhbXM9eyJ0aWQiOiB0aHJlYWQuaWQsICJhaWQiOiBBU1NJU1RBTlRfSUQsICJwaWQiOiBjYWxsLmZ1bmN0aW9uLm5hbWV9LCBoZWFkZXJzPXsiQXV0aG9yaXphdGlvbiI6ICJCZWFyZXIgIiArIEFQSV9LRVl9LCBqc29uPWpzb24ubG9hZHMoY2FsbC5mdW5jdGlvbi5hcmd1bWVudHMpKQogICAgICAgICAgICAgICAgb3V0cHV0cy5hcHBlbmQoeyJ0b29sX2NhbGxfaWQiOiBjYWxsLmlkLCAib3V0cHV0IjogcmVzcC50ZXh0Wzo4MDAwXX0pCiAgICAgICAgICAgIHJ1biA9IGF3YWl0IGNsaWVudC5iZXRhLnRocmVhZHMucnVucy5zdWJtaXRfdG9vbF9vdXRwdXRzX2FuZF9wb2xsKHJ1bl9pZD1ydW4uaWQsIHRocmVhZF9pZD10aHJlYWQuaWQsIHRvb2xfb3V0cHV0cz1vdXRwdXRzLCB0aW1lb3V0PTIuMCkKICAgICAgICBpZiBydW4uc3RhdHVzID09ICdmYWlsZWQnIGFuZCBydW4ubGFzdF9lcnJvcjoKICAgICAgICAgICAgcmV0dXJuIHJ1bi5sYXN0X2Vycm9yLm1vZGVsX2R1bXBfanNvbigpCiAgICAgICAgbXNncyA9IGF3YWl0IGNsaWVudC5iZXRhLnRocmVhZHMubWVzc2FnZXMubGlzdCh0aHJlYWRfaWQ9dGhyZWFkLmlkLCBvcmRlcj0nZGVzYycpCiAgICAgICAgYXdhaXQgY2xpZW50LmJldGEudGhyZWFkcy5kZWxldGUodGhyZWFkX2lkPXRocmVhZC5pZCkKICAgICAgICByZXR1cm4gbXNncy5kYXRhWzBdLmNvbnRlbnRbMF0udGV4dC52YWx1ZQogICAgZGVtbyA9IGdyLkNoYXRJbnRlcmZhY2Uoc2VuZF9tZXNzYWdlKQogICAgZGVtby5sYXVuY2goKQo=
Once you have created an Assistant on the DaVinci platform and want to run a preview test, we provide sample code that wires it into Gradio. The steps are as follows:
Get a User API key:
Click the Settings button at the bottom-left of the DaVinci panel, select Assistant API Key, then click the + API Key button to create a new key and copy the API Key.
Get the Assistant ID:
Select the Assistant you want to preview, click the Setting button, switch to the Advanced tab, and copy the Assistant ID.
Demo
Text: replace the corresponding API_KEY and ASSISTANT_ID in the sample code, then chat in the input box.
Image: simply enter an image URL in the input box.
If you want to use a local image, you can convert it to base64 with the Python code below so it can be passed to the API, or use an online tool to convert the image file to base64.
import base64
from mimetypes import guess_type

# Function to encode a local image into data URL
def local_image_to_data_url(image_path):
    # Guess the MIME type of the image based on the file extension
    mime_type, _ = guess_type(image_path)
    if mime_type is None:
        mime_type = 'application/octet-stream'  # Default MIME type if none is found

    # Read and encode the image file
    with open(image_path, "rb") as image_file:
        base64_encoded_data = base64.b64encode(image_file.read()).decode('utf-8')

    # Construct the data URL
    return f"data:{mime_type};base64,{base64_encoded_data}"

# Example usage
image_path = '<path_to_image>'
data_url = local_image_to_data_url(image_path)
print("Data URL:", data_url)
Once you have the base64 string, enter it in the input box in the following format:
"data:image/jpeg;base64,<your_image_data>"
Gradio sample code (backup): if the Gradio link above does not open, paste this snippet into the Gradio playground.
# Please update your assistant id and api key here:
API_KEY = ""
ASSISTANT_ID = ""
ASSISTANT_API = "https://prod.dvcbot.net/api/assts/v1"

import micropip; await micropip.install('openai==1.39.0'); from pyodide.http import pyfetch; import httpx; import gradio as gr
from openai import AsyncOpenAI
from datetime import datetime
import json

class Transport(httpx.AsyncBaseTransport):
    async def handle_async_request(self, request: httpx.Request):
        resp = await pyfetch(str(request.url), method=request.method, headers=dict(request.headers.items()), body=json.dumps(json.loads(request.content), ensure_ascii=False).encode() if request.method != 'GET' and request.method != 'DELETE' else None)
        return httpx.Response(resp.status, headers=resp.headers, stream=httpx.ByteStream(await resp.bytes()))

client = AsyncOpenAI(base_url=ASSISTANT_API, api_key=API_KEY, http_client=httpx.AsyncClient(transport=Transport()))
if __name__ == "__main__":
    async def send_message(message, history):
        thread = await client.beta.threads.create(messages=[{"role": "user" if i == 0 else "assistant", "content": c} for p in history for i, c in enumerate(p)])
        await client.beta.threads.messages.create(thread_id=thread.id, role='user', content=message)
        run = await client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id=ASSISTANT_ID, additional_instructions=f"\nThe current time is: {datetime.now()}", timeout=2.0)
        while run.status == 'requires_action' and run.required_action:
            outputs = []
            for call in run.required_action.submit_tool_outputs.tool_calls:
                resp = await client._client.post(ASSISTANT_API+'/pluginapi', params={"tid": thread.id, "aid": ASSISTANT_ID, "pid": call.function.name}, headers={"Authorization": "Bearer " + API_KEY}, json=json.loads(call.function.arguments))
                outputs.append({"tool_call_id": call.id, "output": resp.text[:8000]})
            run = await client.beta.threads.runs.submit_tool_outputs_and_poll(run_id=run.id, thread_id=thread.id, tool_outputs=outputs, timeout=2.0)
        if run.status == 'failed' and run.last_error:
            return run.last_error.model_dump_json()
        msgs = await client.beta.threads.messages.list(thread_id=thread.id, order='desc')
        await client.beta.threads.delete(thread_id=thread.id)
        return msgs.data[0].content[0].text.value
    demo = gr.ChatInterface(send_message)
    demo.launch()
Using CUSTOMIZED_PARAMS to pass custom parameters into a plugin
If you need to pass custom parameters through the plugin-api, modify the plugin-api call (line 25 of the sample code above) as follows.
customized_params must be valid JSON.
# Original plugin-api call on line 25 of the sample code
resp = await client._client.post(ASSISTANT_API+'/pluginapi', params={"tid": thread.id, "aid": ASSISTANT_ID, "pid": call.function.name}, headers={"Authorization": "Bearer " + API_KEY}, json=json.loads(call.function.arguments))

# Modified call that passes the custom parameters
my_customized_params = """{"msg":"this is a customized params for demo"}"""
resp = await client._client.post(ASSISTANT_API+'/pluginapi', params={"tid": thread.id, "aid": ASSISTANT_ID, "pid": call.function.name, "params": my_customized_params}, headers={"Authorization": "Bearer " + API_KEY}, json=json.loads(call.function.arguments))
Inside the plugin, the custom parameters that were passed in can then be read from the CUSTOMIZED_PARAMS variable.
For a Python plugin, see the Python Plugin documentation for an example.
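As an illustration only (the authoritative interface is described in the Python Plugin documentation), a Python plugin might read the value like this; it is an assumption here that the plugin runtime injects CUSTOMIZED_PARAMS as the JSON string passed in the params query parameter above.

import json

# Assumption: the plugin runtime exposes the custom parameters as a JSON string
# named CUSTOMIZED_PARAMS; parse it before use.
params = json.loads(CUSTOMIZED_PARAMS)
print(params["msg"])  # -> "this is a customized params for demo"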
MediaTek DaVinci Assistant API Usage Tutorial
Curl
Common API curl examples
Note: if you are running your own deployment instead of doing a PoC in the sandbox, replace the URL "https://prod.dvcbot.net" in every API call with your own deployment's URL.
create thread
curl https://prod.dvcbot.net/api/assts/v1/threads \
  -H "Authorization: ${YOUR API KEY}" \
  -H 'Content-Type: application/json' \
  -d ''
create msg
curl https://prod.dvcbot.net/api/assts/v1/threads/{thread_id}/messages \
  -H "Authorization: ${YOUR API KEY}" \
  -H 'Content-Type: application/json' \
  -d '{
    "role": "user",
    "content": "How does AI work? Explain it in simple terms."
  }'
create run (non streaming)
curl https://prod.dvcbot.net/api/assts/v1/threads/{thread_id}/runs \
  -H "Authorization: ${YOUR API KEY}" \
  -H 'Content-Type: application/json' \
  -d '{
    "assistant_id": "asst_abc123"
  }'
call plugin api
curl "https://prod.dvcbot.net/api/assts/v1/pluginapi?tid={thread_id}&aid={assistant_id}&pid={function_name}" \
  -H "Authorization: ${YOUR API KEY}" \
  -H 'Content-Type: application/json' \
  -d '<function arguments as JSON>'
submit tool outputs to run
curl https://prod.dvcbot.net/api/assts/v1/threads/{thread_id}/runs/{run_id}/submit_tool_outputs \
  -H "Authorization: ${YOUR API KEY}" \
  -H 'Content-Type: application/json' \
  -d '{
    "tool_outputs": [
      {
        "tool_call_id": "call_abc123",
        "output": "28C"
      }
    ]
  }'
Wrap up: use curl to write a script
Combining the common curl calls above, you can put together a bare-bones DaVinci chat script in bash:
First obtain the Assistant ID & User API key
Export API_KEY, ASSISTANT_ID, and your INPUT_MSG:
export ASSISTANT_ID="YOUR ASSISTANT ID"
export API_KEY="YOUR API KEY"
export INPUT_MSG="YOUR MESSAGE TO ASSISTANT"
Run the following script (make sure the jq package is installed first):
mac: brew install jq
ubuntu: apt-get install jq
CentOS: yum install jq
BASE_URL="https://prod.dvcbot.net/api/assts/v1" # create thread AUTH_HEADER="Authorization: Bearer ${API_KEY}" THREAD_URL="${BASE_URL}/threads" THREAD_ID=`curl -s --location "${THREAD_URL}" \ --header 'OpenAI-Beta: assistants=v2' \ --header 'Content-Type: application/json' \ --header "${AUTH_HEADER}" \ --data '{}' | jq .id | tr -d '"'` # add msg to thread CREATE_MSG_DATA=$(< <(cat <<EOF { "role": "user", "content": "$INPUT_MSG" } EOF )) MSG_URL="${BASE_URL}/threads/${THREAD_ID}/messages" curl -s --location "${MSG_URL}" \ --header 'OpenAI-Beta: assistants=v2' \ --header 'Content-Type: application/json' \ --header "${AUTH_HEADER}" \ --data "${CREATE_MSG_DATA}" > /dev/null # run the assistant within thread CREATE_RUN_DATA=$(< <(cat <<EOF { "assistant_id": "$ASSISTANT_ID", "additional_instructions": "The current time is: `date '+%Y-%m-%d %H:%M:%S'`" } EOF )) RUN_URL="${BASE_URL}/threads/${THREAD_ID}/runs" RUN_ID=`curl -s --location "${RUN_URL}" \ --header 'OpenAI-Beta: assistants=v2' \ --header 'Content-Type: application/json' \ --header "${AUTH_HEADER}" \ --data "${CREATE_RUN_DATA}" | jq .id | tr -d '"'` # get run result RUN_STAUS="" while [[ $RUN_STAUS != "completed" ]] do RESP=`curl -s --location --request GET "${RUN_URL}/$RUN_ID" \ --header 'OpenAI-Beta: assistants=v2' \ --header 'Content-Type: application/json' \ --header "${AUTH_HEADER}"` RUN_STAUS=`echo $RESP| jq .status | tr -d '"'`; REQUIRED_ACTION=`echo $RESP| jq .required_action` while [[ $RUN_STAUS = "requires_action" ]] && [[ ! -z "$REQUIRED_ACTION" ]] do TOOL_OUTPUTS='[]' LEN=$( echo "$REQUIRED_ACTION" | jq '.submit_tool_outputs.tool_calls | length' ) for (( i=0; i<$LEN; i++ )) do FUNC_NAME=`echo "$REQUIRED_ACTION" | jq ".submit_tool_outputs.tool_calls[$i].function.name" | tr -d '"'` ARGS=`echo "$REQUIRED_ACTION" | jq ".submit_tool_outputs.tool_calls[$i].function.arguments"` ARGS=${ARGS//\\\"/\"} ARGS=${ARGS#"\""} ARGS=${ARGS%"\""} PLUGINAPI_URL="${BASE_URL}/pluginapi?tid=${THREAD_ID}&aid=${ASSISTANT_ID}&pid=${FUNC_NAME}" OUTPUT=`curl -s --location "${PLUGINAPI_URL}" \ --header 'OpenAI-Beta: assistants=v2' \ --header 'Content-Type: application/json' \ --header "${AUTH_HEADER}" \ --data "${ARGS}"` OUTPUT="${OUTPUT:0:8000}" OUTPUT=${OUTPUT//\"/\\\"} CALL_ID=`echo "$REQUIRED_ACTION" | jq ".submit_tool_outputs.tool_calls[$i].id" | tr -d '"'` TOOL_OUTPUT=$(< <(cat <<EOF { "tool_call_id": "$CALL_ID", "output": "$OUTPUT" } EOF )) TOOL_OUTPUTS=$(jq --argjson obj "$TOOL_OUTPUT" '. += [$obj]' <<< "$TOOL_OUTPUTS") done SUBMIT_TOOL_OUTPUT_RUN_RUL="${BASE_URL}/threads/${THREAD_ID}/runs/${RUN_ID}/submit_tool_outputs" TOOL_OUTPUTS_DATA=$(< <(cat <<EOF { "tool_outputs": $TOOL_OUTPUTS } EOF )) curl -s --location "${SUBMIT_TOOL_OUTPUT_RUN_RUL}" \ --header 'OpenAI-Beta: assistants=v2' \ --header 'Content-Type: application/json' \ --header "${AUTH_HEADER}" \ --data "${TOOL_OUTPUTS_DATA}" > /dev/null RESP=`curl -s --location --request GET "${RUN_URL}/$RUN_ID" \ --header 'OpenAI-Beta: assistants=v2' \ --header 'Content-Type: application/json' \ --header "${AUTH_HEADER}"` RUN_STAUS=`echo $RESP| jq .status | tr -d '"'`; sleep 1 done sleep 1 done #list msg RESPONSE_MSG=`curl -s --location --request GET "${MSG_URL}" \ --header 'OpenAI-Beta: assistants=v2' \ --header 'Content-Type: application/json' \ --header "${AUTH_HEADER}" | jq .data[0].content[].text.value` echo "you: "$INPUT_MSG echo "" echo "davinci bot: "$RESPONSE_MSG
You should see output like the following:
you: "your message here" davinci bot: "response from assistant"
Image
First obtain the Assistant ID & User API key
Export API_KEY, ASSISTANT_ID, and your IMAGE_URL:
export ASSISTANT_ID="YOUR ASSISTANT ID"
export API_KEY="YOUR API KEY"
export IMAGE_URL="YOUR IMAGE URL HERE"
For the IMAGE_URL format, see the Gradio image example above.
Run the following script (make sure the jq package is installed first):
mac: brew install jq
ubuntu: apt-get install jq
CentOS: yum install jq
BASE_URL="https://prod.dvcbot.net/api/assts/v1" # create thread AUTH_HEADER="Authorization: Bearer ${API_KEY}" THREAD_URL="${BASE_URL}/threads" THREAD_ID=`curl -s --location "${THREAD_URL}" \ --header 'OpenAI-Beta: assistants=v2' \ --header 'Content-Type: application/json' \ --header "${AUTH_HEADER}" \ --data '{}' | jq .id | tr -d '"'` # add msg to thread CREATE_MSG_DATA=$(< <(cat <<EOF { "role": "user", "content": [ { "type": "image_url", "image_url": { "url": "$IMAGE_URL" } } ] } EOF )) MSG_URL="${BASE_URL}/threads/${THREAD_ID}/messages" curl -s --location "${MSG_URL}" \ --header 'OpenAI-Beta: assistants=v2' \ --header 'Content-Type: application/json' \ --header "${AUTH_HEADER}" \ --data "${CREATE_MSG_DATA}" > /dev/null # run the assistant within thread CREATE_RUN_DATA=$(< <(cat <<EOF { "assistant_id": "$ASSISTANT_ID", "additional_instructions": "The current time is: `date '+%Y-%m-%d %H:%M:%S'`" } EOF )) RUN_URL="${BASE_URL}/threads/${THREAD_ID}/runs" RUN_ID=`curl -s --location "${RUN_URL}" \ --header 'OpenAI-Beta: assistants=v2' \ --header 'Content-Type: application/json' \ --header "${AUTH_HEADER}" \ --data "${CREATE_RUN_DATA}" | jq .id | tr -d '"'` # get run result RUN_STAUS="" while [[ $RUN_STAUS != "completed" ]] do RESP=`curl -s --location --request GET "${RUN_URL}/$RUN_ID" \ --header 'OpenAI-Beta: assistants=v2' \ --header 'Content-Type: application/json' \ --header "${AUTH_HEADER}"` RUN_STAUS=`echo $RESP| jq .status | tr -d '"'`; REQUIRED_ACTION=`echo $RESP| jq .required_action` while [[ $RUN_STAUS = "requires_action" ]] && [[ ! -z "$REQUIRED_ACTION" ]] do TOOL_OUTPUTS='[]' LEN=$( echo "$REQUIRED_ACTION" | jq '.submit_tool_outputs.tool_calls | length' ) for (( i=0; i<$LEN; i++ )) do FUNC_NAME=`echo "$REQUIRED_ACTION" | jq ".submit_tool_outputs.tool_calls[$i].function.name" | tr -d '"'` ARGS=`echo "$REQUIRED_ACTION" | jq ".submit_tool_outputs.tool_calls[$i].function.arguments"` ARGS=${ARGS//\\\"/\"} ARGS=${ARGS#"\""} ARGS=${ARGS%"\""} PLUGINAPI_URL="${BASE_URL}/pluginapi?tid=${THREAD_ID}&aid=${ASSISTANT_ID}&pid=${FUNC_NAME}" OUTPUT=`curl -s --location "${PLUGINAPI_URL}" \ --header 'OpenAI-Beta: assistants=v2' \ --header 'Content-Type: application/json' \ --header "${AUTH_HEADER}" \ --data "${ARGS}"` OUTPUT="${OUTPUT:0:8000}" OUTPUT=${OUTPUT//\"/\\\"} CALL_ID=`echo "$REQUIRED_ACTION" | jq ".submit_tool_outputs.tool_calls[$i].id" | tr -d '"'` TOOL_OUTPUT=$(< <(cat <<EOF { "tool_call_id": "$CALL_ID", "output": "$OUTPUT" } EOF )) TOOL_OUTPUTS=$(jq --argjson obj "$TOOL_OUTPUT" '. += [$obj]' <<< "$TOOL_OUTPUTS") done SUBMIT_TOOL_OUTPUT_RUN_RUL="${BASE_URL}/threads/${THREAD_ID}/runs/${RUN_ID}/submit_tool_outputs" TOOL_OUTPUTS_DATA=$(< <(cat <<EOF { "tool_outputs": $TOOL_OUTPUTS } EOF )) curl -s --location "${SUBMIT_TOOL_OUTPUT_RUN_RUL}" \ --header 'OpenAI-Beta: assistants=v2' \ --header 'Content-Type: application/json' \ --header "${AUTH_HEADER}" \ --data "${TOOL_OUTPUTS_DATA}" > /dev/null RESP=`curl -s --location --request GET "${RUN_URL}/$RUN_ID" \ --header 'OpenAI-Beta: assistants=v2' \ --header 'Content-Type: application/json' \ --header "${AUTH_HEADER}"` RUN_STAUS=`echo $RESP| jq .status | tr -d '"'`; sleep 1 done sleep 1 done #list msg RESPONSE_MSG=`curl -s --location --request GET "${MSG_URL}" \ --header 'OpenAI-Beta: assistants=v2' \ --header 'Content-Type: application/json' \ --header "${AUTH_HEADER}" | jq .data[0].content[].text.value` echo "" echo "davinci bot: "$RESPONSE_MSG
You should see output like the following:
davinci bot: "response from assistant"
Python
Text or image as Input
import json
from openai import OpenAI
from datetime import datetime

ASSISTANT_API = 'https://prod.dvcbot.net/api/assts/v1'
API_KEY = ''

client = OpenAI(
    base_url=ASSISTANT_API,
    api_key=API_KEY,
)
ASSISTANT_ID = ''

# Define multiple messages
messages = [
    {"type": "text", "text": "tell me about the image"},
    {"type": "image_url", "image_url": {"url": "https://xxx.xxx.xxx.jpg"}},
    {"type": "text", "text": "What do you think about this image?"}
]

# Create a thread
thread = client.beta.threads.create(messages=[])

# Send the messages one after another
for message in messages:
    client.beta.threads.messages.create(thread_id=thread.id, role='user', content=[message])

# Run the assistant
run = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id=ASSISTANT_ID, additional_instructions=f"\nThe current time is: {datetime.now()}", timeout=2.0)
while run.status == 'requires_action' and run.required_action:
    outputs = []
    for call in run.required_action.submit_tool_outputs.tool_calls:
        resp = client._client.post(ASSISTANT_API + '/pluginapi', params={"tid": thread.id, "aid": ASSISTANT_ID, "pid": call.function.name}, headers={"Authorization": "Bearer " + API_KEY}, json=json.loads(call.function.arguments))
        outputs.append({"tool_call_id": call.id, "output": resp.text[:8000]})
    run = client.beta.threads.runs.submit_tool_outputs_and_poll(run_id=run.id, thread_id=thread.id, tool_outputs=outputs, timeout=2.0)
if run.status == 'failed' and run.last_error:
    print(run.last_error.model_dump_json())
msgs = client.beta.threads.messages.list(thread_id=thread.id, order='desc')
client.beta.threads.delete(thread_id=thread.id)
print(msgs.data[0].content[0].text.value)
Text & image as input (Streaming)
import asyncio
import json
import httpx
from openai import AsyncOpenAI

ASSISTANT_API = "https://prod.dvcbot.net/api/assts/v1"
API_KEY = ""
ASSISTANT_ID = ""
USER_PROMPT = "從一數到一千"

async def main():
    http_client = httpx.AsyncClient(verify=False)
    client = AsyncOpenAI(base_url=ASSISTANT_API, api_key=API_KEY, http_client=http_client)
    thread = await client.beta.threads.create()
    user_prompt = USER_PROMPT
    await client.beta.threads.messages.create(
        thread_id=thread.id,
        role="user",
        content=user_prompt,
    )
    stream = await client.beta.threads.runs.create(
        assistant_id=ASSISTANT_ID,
        thread_id=thread.id,
        stream=True,
    )
    requires_action_run_id = ""
    async for event in stream:
        if event.event == "thread.message.delta":
            print(event)
        elif event.event == "thread.run.requires_action":
            requires_action_run_id = event.data.id
    if requires_action_run_id != "":
        run = await client.beta.threads.runs.retrieve(requires_action_run_id, thread_id=thread.id)
        outputs = []
        for call in run.required_action.submit_tool_outputs.tool_calls:
            print(f"call plugin {call.function.name} with args: {call.function.arguments}")
            resp = await client._client.post(
                ASSISTANT_API + "/pluginapi",
                params={"tid": thread.id, "aid": ASSISTANT_ID, "pid": call.function.name},
                headers={"Authorization": "Bearer " + API_KEY},
                json=json.loads(call.function.arguments),
                timeout=30,
            )
            result = resp.text[:8000]
            print(f"plugin {call.function.name} result {result}")
            outputs.append({"tool_call_id": call.id, "output": result})
        stream = await client.beta.threads.runs.submit_tool_outputs(
            run_id=run.id,
            stream=True,
            thread_id=thread.id,
            tool_outputs=outputs,
        )
        async for event in stream:
            print(event)
    await client.beta.threads.delete(thread_id=thread.id)

if __name__ == "__main__":
    asyncio.run(main())
Voice mode
(Coming Soon)