
Introduction to the MediaTek DaVinci Assistant API

MediaTek DaVinci provides the Assistant API so that Assistants built on the DaVinci (達哥) platform can be integrated into a wide variety of environments, allowing a DaVinci Assistant to serve your needs across different environments and devices.

How to share a finished Assistant

Once you have built an Assistant on DaVinci, there are two ways to share it with other users:

  1. Share the Assistant with the other user directly

  2. Provide the Assistant's "Assistant ID" together with an API Key issued from your account (a minimal usage sketch follows this list)

    1. To revoke access to the Assistant later, simply delete that API Key from your own API Key panel
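
For reference, here is a minimal sketch of what the recipient can do with those two values. It assumes the OpenAI-compatible Assistant API endpoint used throughout this page; the placeholder strings are not real credentials, and the full flow is covered in the tutorial sections below.

# Minimal sketch: connect to the DaVinci Assistant API with a shared API Key.
from openai import OpenAI

client = OpenAI(
    base_url="https://prod.dvcbot.net/api/assts/v1",  # DaVinci Assistant API endpoint
    api_key="<the API Key shared with you>",          # placeholder
)

# The shared Assistant ID is then passed to every run:
thread = client.beta.threads.create()
client.beta.threads.messages.create(thread_id=thread.id, role="user", content="hello")
run = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id="<the shared Assistant ID>")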

Preview testing with gradio

Once you have created an Assistant on the DaVinci platform, you can preview-test it with the sample code we provide for integrating with gradio. The tutorial steps are as follows:

  1. Obtain a User API key:

    1. Click the More Actions button at the bottom-left of the DaVinci panel and select Settings

      image-20240628-074811.png
    2. In Settings, click the + API Key button to create a new key

      image-20240628-075118.png
    3. Copy the API Key

      image-20240628-075157.png
  2. Obtain the Assistant ID

    1. Select the Assistant you want to preview and click the Setting button

      image-20240628-075641.png
    2. Select the Advanced tab and copy the Assistant ID

      image-20240628-075916.png
  3. Demo

    1. Text:

      1. Open the following Gradio Playground link: https://www.gradio.app/playground?demo=Hello_World&code=IyBQbGVhc2UgdXBkYXRlIHlvdXIgYXNzaXN0YW50IGlkIGFuZCBhcGkga2V5IGhlcmU6CkFQSV9LRVkgPSAiIgpBU1NJU1RBTlRfSUQgPSAiIgpBU1NJU1RBTlRfQVBJID0gImh0dHBzOi8vcHJvZC5kdmNib3QubmV0L2FwaS9hc3N0cy92MSIKCmltcG9ydCBtaWNyb3BpcDsgYXdhaXQgbWljcm9waXAuaW5zdGFsbCgnb3BlbmFpPT0xLjM5LjAnKTsgZnJvbSBweW9kaWRlLmh0dHAgaW1wb3J0IHB5ZmV0Y2g7IGltcG9ydCBodHRweDsgaW1wb3J0IGdyYWRpbyBhcyBncgpmcm9tIG9wZW5haSBpbXBvcnQgQXN5bmNPcGVuQUkKZnJvbSBkYXRldGltZSBpbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGpzb24KCmNsYXNzIFRyYW5zcG9ydChodHRweC5Bc3luY0Jhc2VUcmFuc3BvcnQpOgogICAgYXN5bmMgZGVmIGhhbmRsZV9hc3luY19yZXF1ZXN0KHNlbGYsIHJlcXVlc3Q6IGh0dHB4LlJlcXVlc3QpOgogICAgICAgIHJlc3AgPSBhd2FpdCBweWZldGNoKHN0cihyZXF1ZXN0LnVybCksIG1ldGhvZD1yZXF1ZXN0Lm1ldGhvZCwgaGVhZGVycz1kaWN0KHJlcXVlc3QuaGVhZGVycy5pdGVtcygpKSwgYm9keT1qc29uLmR1bXBzKGpzb24ubG9hZHMocmVxdWVzdC5jb250ZW50KSwgZW5zdXJlX2FzY2lpPUZhbHNlKS5lbmNvZGUoKSBpZiByZXF1ZXN0Lm1ldGhvZCAhPSAnR0VUJyBhbmQgcmVxdWVzdC5tZXRob2QgIT0gJ0RFTEVURScgZWxzZSBOb25lKQogICAgICAgIHJldHVybiBodHRweC5SZXNwb25zZShyZXNwLnN0YXR1cywgaGVhZGVycz1yZXNwLmhlYWRlcnMsIHN0cmVhbT1odHRweC5CeXRlU3RyZWFtKGF3YWl0IHJlc3AuYnl0ZXMoKSkpCgpjbGllbnQgPSBBc3luY09wZW5BSShiYXNlX3VybD1BU1NJU1RBTlRfQVBJLCBhcGlfa2V5PUFQSV9LRVksIGh0dHBfY2xpZW50PWh0dHB4LkFzeW5jQ2xpZW50KHRyYW5zcG9ydD1UcmFuc3BvcnQoKSkpCmlmIF9fbmFtZV9fID09ICJfX21haW5fXyI6CiAgICBhc3luYyBkZWYgc2VuZF9tZXNzYWdlKG1lc3NhZ2UsIGhpc3RvcnkpOgogICAgICAgIHRocmVhZCA9IGF3YWl0IGNsaWVudC5iZXRhLnRocmVhZHMuY3JlYXRlKG1lc3NhZ2VzPVt7InJvbGUiOiAidXNlciIgaWYgaSA9PSAwIGVsc2UgImFzc2lzdGFudCIsICJjb250ZW50IjogY30gZm9yIHAgaW4gaGlzdG9yeSBmb3IgaSwgYyBpbiBlbnVtZXJhdGUocCldKQogICAgICAgIGF3YWl0IGNsaWVudC5iZXRhLnRocmVhZHMubWVzc2FnZXMuY3JlYXRlKHRocmVhZF9pZD10aHJlYWQuaWQsIHJvbGU9J3VzZXInLCBjb250ZW50PW1lc3NhZ2UpCiAgICAgICAgcnVuID0gYXdhaXQgY2xpZW50LmJldGEudGhyZWFkcy5ydW5zLmNyZWF0ZV9hbmRfcG9sbCh0aHJlYWRfaWQ9dGhyZWFkLmlkLCBhc3Npc3RhbnRfaWQ9QVNTSVNUQU5UX0lELCBhZGRpdGlvbmFsX2luc3RydWN0aW9ucz1mIlxuVGhlIGN1cnJlbnQgdGltZSBpczoge2RhdGV0aW1lLm5vdygpfSIpCiAgICAgICAgd2hpbGUgcnVuLnN0YXR1cyA9PSAncmVxdWlyZXNfYWN0aW9uJyBhbmQgcnVuLnJlcXVpcmVkX2FjdGlvbjoKICAgICAgICAgICAgb3V0cHV0cyA9IFtdCiAgICAgICAgICAgIGZvciBjYWxsIGluIHJ1bi5yZXF1aXJlZF9hY3Rpb24uc3VibWl0X3Rvb2xfb3V0cHV0cy50b29sX2NhbGxzOgogICAgICAgICAgICAgICAgcmVzcCA9IGF3YWl0IGNsaWVudC5fY2xpZW50LnBvc3QoQVNTSVNUQU5UX0FQSSsnL3BsdWdpbmFwaScsIHBhcmFtcz17InRpZCI6IHRocmVhZC5pZCwgImFpZCI6IEFTU0lTVEFOVF9JRCwgInBpZCI6IGNhbGwuZnVuY3Rpb24ubmFtZX0sIGhlYWRlcnM9eyJBdXRob3JpemF0aW9uIjogIkJlYXJlciAiICsgQVBJX0tFWX0sIGpzb249anNvbi5sb2FkcyhjYWxsLmZ1bmN0aW9uLmFyZ3VtZW50cykpCiAgICAgICAgICAgICAgICBvdXRwdXRzLmFwcGVuZCh7InRvb2xfY2FsbF9pZCI6IGNhbGwuaWQsICJvdXRwdXQiOiByZXNwLnRleHRbOjgwMDBdfSkKICAgICAgICAgICAgcnVuID0gYXdhaXQgY2xpZW50LmJldGEudGhyZWFkcy5ydW5zLnN1Ym1pdF90b29sX291dHB1dHNfYW5kX3BvbGwocnVuX2lkPXJ1bi5pZCwgdGhyZWFkX2lkPXRocmVhZC5pZCwgdG9vbF9vdXRwdXRzPW91dHB1dHMpCiAgICAgICAgaWYgcnVuLnN0YXR1cyA9PSAnZmFpbGVkJyBhbmQgcnVuLmxhc3RfZXJyb3I6CiAgICAgICAgICAgIHJldHVybiBydW4ubGFzdF9lcnJvci5tb2RlbF9kdW1wX2pzb24oKQogICAgICAgIG1zZ3MgPSBhd2FpdCBjbGllbnQuYmV0YS50aHJlYWRzLm1lc3NhZ2VzLmxpc3QodGhyZWFkX2lkPXRocmVhZC5pZCwgb3JkZXI9J2Rlc2MnKQogICAgICAgIGF3YWl0IGNsaWVudC5iZXRhLnRocmVhZHMuZGVsZXRlKHRocmVhZF9pZD10aHJlYWQuaWQpCiAgICAgICAgcmV0dXJuIG1zZ3MuZGF0YVswXS5jb250ZW50WzBdLnRleHQudmFsdWUKICAgIGRlbW8gPSBnci5DaGF0SW50ZXJmYWNlKHNlbmRfbWVzc2FnZSkKICAgIGRlbW8ubGF1bmNoKCkK

      2. Replace the corresponding API_KEY and ASSISTANT_ID

        image-20240628-080723.png

      3. Type your message in the input box

        image-20240628-080750.png

    2. Image:

      1. Open the Gradio Playground link

      2. Replace the corresponding API_KEY and ASSISTANT_ID

        image-20240628-080723.png

      3. Enter an image url in the input box

        image-20240628-080750.png

        1. If you want to use a local image, you can convert it to base64 with the following Python code so that it can be passed to the API. Alternatively, you can use an online tool to convert the image file to base64.

          import base64
          from mimetypes import guess_type
          
          # Function to encode a local image into data URL 
          def local_image_to_data_url(image_path):
              # Guess the MIME type of the image based on the file extension
              mime_type, _ = guess_type(image_path)
              if mime_type is None:
                  mime_type = 'application/octet-stream'  # Default MIME type if none is found
          
              # Read and encode the image file
              with open(image_path, "rb") as image_file:
                  base64_encoded_data = base64.b64encode(image_file.read()).decode('utf-8')
          
              # Construct the data URL
              return f"data:{mime_type};base64,{base64_encoded_data}"
          
          # Example usage
          image_path = '<path_to_image>'
          data_url = local_image_to_data_url(image_path)
          print("Data URL:", data_url)

Once you have the base64 string, enter it in the input box in the following format:

"data:image/jpeg;base64,<your_image_data>"

MediaTek DaVinci Assistant API usage tutorial

Curl

Text

  1. First obtain the Assistant ID & User API key

  2. Export API_KEY, ASSISTANT_ID, and your input message

    1. export ASSISTANT_ID="YOUR ASSISTANT ID"
      export API_KEY="YOUR API KEY"
      export INPUT_MSG="YOUR MESSAGE TO ASSISTANT"
  3. Run the script below (make sure the jq package is installed in your environment first)

    1. macOS: brew install jq

    2. Ubuntu: apt-get install jq

    3. CentOS: yum install jq

  4. BASE_URL="https://prod.dvcbot.net/api/assts/v1"

    # create a new thread
    AUTH_HEADER="Authorization: Bearer ${API_KEY}"
    THREAD_URL="${BASE_URL}/threads"
    THREAD_ID=`curl -s --location "${THREAD_URL}" \
    --header 'OpenAI-Beta: assistants=v2' \
    --header 'Content-Type: application/json' \
    --header "${AUTH_HEADER}" \
    --data '{}' | jq .id | tr -d '"'`

    # add the user message to the thread
    CREATE_MSG_DATA=$(< <(cat <<EOF
    {
      "role": "user",
      "content": "$INPUT_MSG"
    }
    EOF
    ))
    MSG_URL="${BASE_URL}/threads/${THREAD_ID}/messages"
    curl -s --location "${MSG_URL}" \
    --header 'OpenAI-Beta: assistants=v2' \
    --header 'Content-Type: application/json' \
    --header "${AUTH_HEADER}" \
    --data "${CREATE_MSG_DATA}" > /dev/null

    # run the assistant within the thread
    CREATE_RUN_DATA=$(< <(cat <<EOF
    {
      "assistant_id": "$ASSISTANT_ID",
      "additional_instructions": "The current time is: `date '+%Y-%m-%d %H:%M:%S'`"
    }
    EOF
    ))

    RUN_URL="${BASE_URL}/threads/${THREAD_ID}/runs"
    RUN_ID=`curl -s --location "${RUN_URL}" \
    --header 'OpenAI-Beta: assistants=v2' \
    --header 'Content-Type: application/json' \
    --header "${AUTH_HEADER}" \
    --data "${CREATE_RUN_DATA}" | jq .id | tr -d '"'`

    # poll until the run completes, forwarding any required tool calls
    RUN_STATUS=""
    while [[ $RUN_STATUS != "completed" ]]
    do
        RESP=`curl -s --location --request GET "${RUN_URL}/$RUN_ID" \
    --header 'OpenAI-Beta: assistants=v2' \
    --header 'Content-Type: application/json' \
    --header "${AUTH_HEADER}"`

        RUN_STATUS=`echo "$RESP" | jq .status | tr -d '"'`
        # stop polling if the run ends in a terminal state other than completed
        if [[ $RUN_STATUS == "failed" || $RUN_STATUS == "cancelled" || $RUN_STATUS == "expired" ]]; then
            echo "run ended with status: $RUN_STATUS" >&2
            break
        fi
        REQUIRED_ACTION=`echo "$RESP" | jq .required_action`

        while [[ $RUN_STATUS = "requires_action" ]] && [[ ! -z "$REQUIRED_ACTION" ]]
        do
            TOOL_OUTPUTS='[]'
            LEN=$( echo "$REQUIRED_ACTION" | jq '.submit_tool_outputs.tool_calls | length' )
            for (( i=0; i<$LEN; i++ ))
            do
                FUNC_NAME=`echo "$REQUIRED_ACTION" | jq ".submit_tool_outputs.tool_calls[$i].function.name" | tr -d '"'`

                ARGS=`echo "$REQUIRED_ACTION" | jq ".submit_tool_outputs.tool_calls[$i].function.arguments"`
                ARGS=${ARGS//\\\"/\"}
                ARGS=${ARGS#"\""}
                ARGS=${ARGS%"\""}

                # call the plugin API that backs this tool call
                PLUGINAPI_URL="${BASE_URL}/pluginapi?tid=${THREAD_ID}&aid=${ASSISTANT_ID}&pid=${FUNC_NAME}"
                OUTPUT=`curl -s --location "${PLUGINAPI_URL}" \
    --header 'OpenAI-Beta: assistants=v2' \
    --header 'Content-Type: application/json' \
    --header "${AUTH_HEADER}" \
    --data "${ARGS}"`
                OUTPUT="${OUTPUT:0:8000}"
                OUTPUT=${OUTPUT//\"/\\\"}
                CALL_ID=`echo "$REQUIRED_ACTION" | jq ".submit_tool_outputs.tool_calls[$i].id" | tr -d '"'`
                TOOL_OUTPUT=$(< <(cat <<EOF
    {
      "tool_call_id": "$CALL_ID",
      "output": "$OUTPUT"
    }
    EOF
    ))
                TOOL_OUTPUTS=$(jq --argjson obj "$TOOL_OUTPUT" '. += [$obj]' <<< "$TOOL_OUTPUTS")
            done

            SUBMIT_TOOL_OUTPUT_RUN_URL="${BASE_URL}/threads/${THREAD_ID}/runs/${RUN_ID}/submit_tool_outputs"

            TOOL_OUTPUTS_DATA=$(< <(cat <<EOF
    {
      "tool_outputs": $TOOL_OUTPUTS
    }
    EOF
    ))

            curl -s --location "${SUBMIT_TOOL_OUTPUT_RUN_URL}" \
    --header 'OpenAI-Beta: assistants=v2' \
    --header 'Content-Type: application/json' \
    --header "${AUTH_HEADER}" \
    --data "${TOOL_OUTPUTS_DATA}" > /dev/null

            RESP=`curl -s --location --request GET "${RUN_URL}/$RUN_ID" \
    --header 'OpenAI-Beta: assistants=v2' \
    --header 'Content-Type: application/json' \
    --header "${AUTH_HEADER}"`
            RUN_STATUS=`echo "$RESP" | jq .status | tr -d '"'`
            sleep 1
        done
        sleep 1
    done

    # list the assistant's latest message
    RESPONSE_MSG=`curl -s --location --request GET "${MSG_URL}" \
    --header 'OpenAI-Beta: assistants=v2' \
    --header 'Content-Type: application/json' \
    --header "${AUTH_HEADER}" | jq .data[0].content[].text.value`

    echo "you: "$INPUT_MSG
    echo ""
    echo "davinci bot: "$RESPONSE_MSG
  5. You should see output like the following

    you: "your message here"
    davinci bot: "response from assistant"

Image

  1. First obtain the Assistant ID & User API key

  2. Export API_KEY, ASSISTANT_ID, and your input image URL

    1. export ASSISTANT_ID="YOUR ASSISTANT ID"
      export API_KEY="YOUR API KEY"
      export IMAGE_URL="YOUR IMAGE URL HERE"
      1. For the IMAGE_URL format, refer to the Gradio image example above

  3. Run the script below (make sure the jq package is installed in your environment first)

    1. macOS: brew install jq

    2. Ubuntu: apt-get install jq

    3. CentOS: yum install jq

  4. BASE_URL="https://prod.dvcbot.net/api/assts/v1"
    # create a new thread
    AUTH_HEADER="Authorization: Bearer ${API_KEY}"
    THREAD_URL="${BASE_URL}/threads"
    THREAD_ID=`curl -s --location "${THREAD_URL}" \
    --header 'OpenAI-Beta: assistants=v2' \
    --header 'Content-Type: application/json' \
    --header "${AUTH_HEADER}" \
    --data '{}' | jq .id | tr -d '"'`
    # add the image message to the thread
    CREATE_MSG_DATA=$(< <(cat <<EOF
    {
      "role": "user",
      "content": [
        {
            "type": "image_url",
            "image_url": {
                "url": "$IMAGE_URL"
            }
        }
      ]
    }
    EOF
    ))

    MSG_URL="${BASE_URL}/threads/${THREAD_ID}/messages"
    curl -s --location "${MSG_URL}" \
    --header 'OpenAI-Beta: assistants=v2' \
    --header 'Content-Type: application/json' \
    --header "${AUTH_HEADER}" \
    --data "${CREATE_MSG_DATA}" > /dev/null
    # run the assistant within the thread
    CREATE_RUN_DATA=$(< <(cat <<EOF
    {
      "assistant_id": "$ASSISTANT_ID",
      "additional_instructions": "The current time is: `date '+%Y-%m-%d %H:%M:%S'`"
    }
    EOF
    ))
    RUN_URL="${BASE_URL}/threads/${THREAD_ID}/runs"
    RUN_ID=`curl -s --location "${RUN_URL}" \
    --header 'OpenAI-Beta: assistants=v2' \
    --header 'Content-Type: application/json' \
    --header "${AUTH_HEADER}" \
    --data "${CREATE_RUN_DATA}" | jq .id | tr -d '"'`
    # poll until the run completes, forwarding any required tool calls
    RUN_STATUS=""
    while [[ $RUN_STATUS != "completed" ]]
    do
        RESP=`curl -s --location --request GET "${RUN_URL}/$RUN_ID" \
    --header 'OpenAI-Beta: assistants=v2' \
    --header 'Content-Type: application/json' \
    --header "${AUTH_HEADER}"`
        RUN_STATUS=`echo "$RESP" | jq .status | tr -d '"'`
        # stop polling if the run ends in a terminal state other than completed
        if [[ $RUN_STATUS == "failed" || $RUN_STATUS == "cancelled" || $RUN_STATUS == "expired" ]]; then
            echo "run ended with status: $RUN_STATUS" >&2
            break
        fi
        REQUIRED_ACTION=`echo "$RESP" | jq .required_action`
        while [[ $RUN_STATUS = "requires_action" ]] && [[ ! -z "$REQUIRED_ACTION" ]]
        do
            TOOL_OUTPUTS='[]'
            LEN=$( echo "$REQUIRED_ACTION" | jq '.submit_tool_outputs.tool_calls | length' )
            for (( i=0; i<$LEN; i++ ))
            do
                FUNC_NAME=`echo "$REQUIRED_ACTION" | jq ".submit_tool_outputs.tool_calls[$i].function.name" | tr -d '"'`
                ARGS=`echo "$REQUIRED_ACTION" | jq ".submit_tool_outputs.tool_calls[$i].function.arguments"`
                ARGS=${ARGS//\\\"/\"}
                ARGS=${ARGS#"\""}
                ARGS=${ARGS%"\""}
                # call the plugin API that backs this tool call
                PLUGINAPI_URL="${BASE_URL}/pluginapi?tid=${THREAD_ID}&aid=${ASSISTANT_ID}&pid=${FUNC_NAME}"
                OUTPUT=`curl -s --location "${PLUGINAPI_URL}" \
    --header 'OpenAI-Beta: assistants=v2' \
    --header 'Content-Type: application/json' \
    --header "${AUTH_HEADER}" \
    --data "${ARGS}"`
                OUTPUT="${OUTPUT:0:8000}"
                OUTPUT=${OUTPUT//\"/\\\"}
                CALL_ID=`echo "$REQUIRED_ACTION" | jq ".submit_tool_outputs.tool_calls[$i].id" | tr -d '"'`
                TOOL_OUTPUT=$(< <(cat <<EOF
    {
      "tool_call_id": "$CALL_ID",
      "output": "$OUTPUT"
    }
    EOF
    ))
                TOOL_OUTPUTS=$(jq --argjson obj "$TOOL_OUTPUT" '. += [$obj]' <<< "$TOOL_OUTPUTS")
            done
            SUBMIT_TOOL_OUTPUT_RUN_URL="${BASE_URL}/threads/${THREAD_ID}/runs/${RUN_ID}/submit_tool_outputs"
            TOOL_OUTPUTS_DATA=$(< <(cat <<EOF
    {
      "tool_outputs": $TOOL_OUTPUTS
    }
    EOF
    ))
            curl -s --location "${SUBMIT_TOOL_OUTPUT_RUN_URL}" \
    --header 'OpenAI-Beta: assistants=v2' \
    --header 'Content-Type: application/json' \
    --header "${AUTH_HEADER}" \
    --data "${TOOL_OUTPUTS_DATA}" > /dev/null
            RESP=`curl -s --location --request GET "${RUN_URL}/$RUN_ID" \
    --header 'OpenAI-Beta: assistants=v2' \
    --header 'Content-Type: application/json' \
    --header "${AUTH_HEADER}"`
            RUN_STATUS=`echo "$RESP" | jq .status | tr -d '"'`
            sleep 1
        done
        sleep 1
    done
    # list the assistant's latest message
    RESPONSE_MSG=`curl -s --location --request GET "${MSG_URL}" \
    --header 'OpenAI-Beta: assistants=v2' \
    --header 'Content-Type: application/json' \
    --header "${AUTH_HEADER}" | jq .data[0].content[].text.value`

    echo ""
    echo "davinci bot: "$RESPONSE_MSG
  5. You should see output like the following

    davinci bot: "response from assistant"

Python

Text or image as input

import json
from openai import OpenAI
from datetime import datetime

ASSISTANT_API = 'https://prod.dvcbot.net/api/assts/v1'
API_KEY = ''
client = OpenAI(
    base_url=ASSISTANT_API,
    api_key=API_KEY,
)
ASSISTANT_ID = ''

# define the message parts to send (text and image)
messages = [
    {"type": "text", "text": "tell me about the image"},
    {"type": "image_url", "image_url": {"url": "https://xxx.xxx.xxx.jpg"}},
    {"type": "text", "text": "What do you think about this image?"}
]

# create a thread
thread = client.beta.threads.create(messages=[])

# send the messages to the thread one by one
for message in messages:
    client.beta.threads.messages.create(thread_id=thread.id, role='user', content=[message])

# run the assistant
run = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id=ASSISTANT_ID, additional_instructions=f"\nThe current time is: {datetime.now()}")

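# forward any required tool calls to the assistant's plugin API, then resume polling the run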
while run.status == 'requires_action' and run.required_action:
    outputs = []
    for call in run.required_action.submit_tool_outputs.tool_calls:
        resp = client._client.post(ASSISTANT_API + '/pluginapi', params={"tid": thread.id, "aid": ASSISTANT_ID, "pid": call.function.name}, headers={"Authorization": "Bearer " + API_KEY}, json=json.loads(call.function.arguments))
        outputs.append({"tool_call_id": call.id, "output": resp.text[:8000]})
    run = client.beta.threads.runs.submit_tool_outputs_and_poll(run_id=run.id, thread_id=thread.id, tool_outputs=outputs)

if run.status == 'failed' and run.last_error:
    print(run.last_error.model_dump_json())

msgs = client.beta.threads.messages.list(thread_id=thread.id, order='desc')
client.beta.threads.delete(thread_id=thread.id)
print(msgs.data[0].content[0].text.value)

Text & image as input (Streaming)

import json

from typing_extensions import override
from openai import AssistantEventHandler, OpenAI
from openai.types.beta import AssistantStreamEvent
from openai.types.beta.threads import Message, MessageDelta
from openai.types.beta.threads.runs import RunStep, ToolCall

ASSISTANT_API='https://prod.dvcbot.net/api/assts/v1'
API_KEY='PLACE YOUR API KEY HERE'
client = OpenAI(
    base_url=ASSISTANT_API,
    api_key=API_KEY,
)

class EventHandler(AssistantEventHandler):
   def __init__(self, thread_id, assistant_id):
       super().__init__()
       self.output = None
       self.tool_id = None
       self.thread_id = thread_id
       self.assistant_id = assistant_id
       self.run_id = None
       self.run_step = None
       self.function_name = ""
       self.arguments = ""
      
   @override
   def on_text_created(self, text) -> None:
       print(f"\nassistant on_text_created > ", end="", flush=True)

   @override
   def on_text_delta(self, delta, snapshot):
       # print(f"\nassistant on_text_delta > {delta.value}", end="", flush=True)
       print(f"{delta.value}")

   @override
   def on_end(self, ):
       print(f"\n end assistant > ",self.current_run_step_snapshot, end="", flush=True)

   @override
   def on_exception(self, exception: Exception) -> None:
       """Fired whenever an exception happens during streaming"""
       print(f"\nassistant > {exception}\n", end="", flush=True)

   @override
   def on_message_created(self, message: Message) -> None:
       print(f"\nassistant on_message_created > {message}\n", end="", flush=True)

   @override
   def on_message_done(self, message: Message) -> None:
       print(f"\nassistant on_message_done > {message}\n", end="", flush=True)

   @override
   def on_message_delta(self, delta: MessageDelta, snapshot: Message) -> None:
       # print(f"\nassistant on_message_delta > {delta}\n", end="", flush=True)
       pass

   def on_tool_call_created(self, tool_call):
       # 4
       print(f"\nassistant on_tool_call_created > {tool_call}")
       self.function_name = tool_call.function.name       
       self.tool_id = tool_call.id
       print(f"\on_tool_call_created > run_step.status > {self.run_step.status}")
      
       print(f"\nassistant > {tool_call.type} {self.function_name}\n", flush=True)

       keep_retrieving_run = client.beta.threads.runs.retrieve(
           thread_id=self.thread_id,
           run_id=self.run_id
       )

       while keep_retrieving_run.status in ["queued", "in_progress"]: 
           keep_retrieving_run = client.beta.threads.runs.retrieve(
               thread_id=self.thread_id,
               run_id=self.run_id
           )
          
           print(f"\nSTATUS: {keep_retrieving_run.status}")      
      
   @override
   def on_tool_call_done(self, tool_call: ToolCall) -> None:       
       keep_retrieving_run = client.beta.threads.runs.retrieve(
           thread_id=self.thread_id,
           run_id=self.run_id
       )
      
       print(f"\nDONE STATUS: {keep_retrieving_run.status}")
      
       if keep_retrieving_run.status == "completed":
           all_messages = client.beta.threads.messages.list(
               thread_id=self.thread_id
           )

           print(all_messages.data[0].content[0].text.value, "", "")
           return
      
       elif keep_retrieving_run.status == "requires_action":
           print("here you would call your function")

           if self.function_name == "SEARCH":
               # forward the tool calls to the plugin API, then stream the rest of the run
               outputs = []
               for call in keep_retrieving_run.required_action.submit_tool_outputs.tool_calls:
                   resp = client._client.post(ASSISTANT_API+'/pluginapi', params={"tid": self.thread_id, "aid": self.assistant_id, "pid": call.function.name}, headers={"Authorization": "Bearer " + API_KEY}, json=json.loads(call.function.arguments))
                   outputs.append({"tool_call_id": call.id, "output": resp.text[:8000]})
               self.output = outputs

               with client.beta.threads.runs.submit_tool_outputs_stream(
                   thread_id=self.thread_id,
                   run_id=self.run_id,
                   tool_outputs=self.output,
                   event_handler=EventHandler(self.thread_id, self.assistant_id)
               ) as stream:
                   stream.until_done()
           else:
               print("unknown function")
               return
      
   @override
   def on_run_step_created(self, run_step: RunStep) -> None:
       # 2       
       print(f"on_run_step_created")
       self.run_id = run_step.run_id
       self.run_step = run_step
       print("The type ofrun_step run step is ", type(run_step), flush=True)
       print(f"\n run step created assistant > {run_step}\n", flush=True)

   @override
   def on_run_step_done(self, run_step: RunStep) -> None:
       print(f"\n run step done assistant > {run_step}\n", flush=True)

   def on_tool_call_delta(self, delta, snapshot): 
       if delta.type == 'function':
           # the arguments stream through here and then you get the requires_action event
           print(delta.function.arguments, end="", flush=True)
           self.arguments += delta.function.arguments
       elif delta.type == 'code_interpreter':
           print(f"on_tool_call_delta > code_interpreter")
           if delta.code_interpreter.input:
               print(delta.code_interpreter.input, end="", flush=True)
           if delta.code_interpreter.outputs:
               print(f"\n\noutput >", flush=True)
               for output in delta.code_interpreter.outputs:
                   if output.type == "logs":
                       print(f"\n{output.logs}", flush=True)
       else:
           print("ELSE")
           print(delta, end="", flush=True)

   @override
   def on_event(self, event: AssistantStreamEvent) -> None:
       # print("In on_event of event is ", event.event, flush=True)

       if event.event == "thread.run.requires_action":
           print("\nthread.run.requires_action > submit tool call")
           print(f"ARGS: {self.arguments}")
 



assistant = client.beta.assistants.create(
    name='test example',
    model='aide-gpt-4-turbo',
    instructions="you are an assistant that will answer my question",
    tools=[
        {
            "type": "function",
             "function": {
                "name": "SEARCH",
                "description": "Search more knowledge or realtime information from the Internet to answer the user.",
                "parameters": {
                    "type": "object",
                    "properties": {
                    "query": {
                        "type": "object",
                        "properties": {
                        "q": {
                            "type": "string",
                            "description": "Query string to be searched for on the search engine. This should be infered from the user's question and the conversation. Please split the original user query completely into more than 5 closely related important keywords, which are devided by `space key` for searching. If searching site is specified by me, please gnerate it followed by a site:XXX.com"
                        },
                        "mkt": {
                            "type": "string",
                            "enum": [
                            "zh-TW",
                            "en-US"
                            ],
                            "description": "The market that should be searched for. It should be aligned with the languages the role `user` adopts. E.g. the language #zh-TW maps with the `zh-TW` mkt option."
                        }
                        },
                        "required": [
                            "q",
                            "mkt"
                        ]
                    },
                    "topk": {
                        "type": "string",
                        "description": "The number of search results, it should be set as a single integer number between 1~5."
                    }
                    },
                    "required": [
                    "query",
                    "topk"
                    ]
                }
            }
        }
    ],
    metadata={
        'backend_id': 'default'
    }
)
asst_id=assistant.id
print(f"\nassistant created, id: ", asst_id, flush=True)

new_thread = client.beta.threads.create()
prompt = "中國在 2024 巴黎奧運的表現如何?"
client.beta.threads.messages.create(thread_id=new_thread.id, role="user", content=prompt)

with client.beta.threads.runs.create_and_stream(
    thread_id=new_thread.id,
    assistant_id=asst_id,
    instructions=prompt,
    event_handler=EventHandler(new_thread.id, asst_id),
) as stream:
    stream.until_done()
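
The streaming example creates a throwaway assistant and thread and leaves them behind. If you run it repeatedly, you may want to clean up afterwards; a minimal sketch, assuming the client, asst_id, and new_thread objects defined above:

# Optional cleanup once the stream has finished.
client.beta.threads.delete(thread_id=new_thread.id)
client.beta.assistants.delete(assistant_id=asst_id)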

Voice mode

(Coming Soon)
