%pip install tablegpt-agent
TableGPT Agent depends on pybox to manage its code execution environment. By default, pybox operates in in-cluster mode. If you intend to run tablegpt-agent in a local environment, install the optional dependency as follows:
%pip install tablegpt-agent[local]
This tutorial uses langchain-openai for the chat model instance. Please make sure you have it installed:
%pip install langchain-openai
Set Up the LLM Service
Before using TableGPT Agent, ensure you have an OpenAI-compatible server configured to host TableGPT2. We recommend using vllm for this:
python -m vllm.entrypoints.openai.api_server --served-model-name TableGPT2-7B --model path/to/weights
NOTES:
- To analyze tabular data with tablegpt-agent, make sure TableGPT2 is served with vllm version 0.5.5 or higher.
- For production environments, it's important to optimize the vllm server configuration. For details, refer to the vllm documentation on server configuration.
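Once the server is running, you can optionally sanity-check it before wiring up the agent. This sketch assumes vllm's default host and port (`localhost:8000`); adjust the URL if you passed `--host` or `--port`:

```shell
# List the models the server exposes; the response should include TableGPT2-7B.
curl http://localhost:8000/v1/models
```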
Create TableGPT Agent
NOTE: TableGPT Agent fully supports async invocation. If you are running this tutorial in a Jupyter Notebook, no additional setup is required. However, if you plan to run the tutorial in a Python console, make sure to use a console that supports asynchronous operations. To get started, execute the following command:
python -m asyncio
In the console or notebook, create the agent as follows:
from langchain_openai import ChatOpenAI
from pybox import AsyncLocalPyBoxManager
from tablegpt.agent import create_tablegpt_graph
llm = ChatOpenAI(openai_api_base="YOUR_VLLM_URL", openai_api_key="whatever", model_name="TableGPT2-7B")
pybox_manager = AsyncLocalPyBoxManager()
agent = create_tablegpt_graph(
llm=llm,
pybox_manager=pybox_manager,
)
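If you would rather run the tutorial as a plain Python script than in an async-capable console, wrap the agent calls in a coroutine and drive it with `asyncio.run`. A minimal sketch of the pattern, with a stand-in coroutine in place of the real `agent.ainvoke` call:

```python
import asyncio

async def main():
    # In a real script this would be: state = await agent.ainvoke(_input)
    return "state"

# asyncio.run creates an event loop, runs the coroutine to completion,
# and returns its result.
result = asyncio.run(main())
print(result)
```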
Start Chatting
from datetime import date
from langchain_core.messages import HumanMessage
message = HumanMessage(content="Hi")
_input = {
"messages": [message],
"parent_id": "some-parent-id",
"date": date.today(),
}
state = await agent.ainvoke(_input)
state["messages"]
[HumanMessage(content='Hi', additional_kwargs={}, response_metadata={}, id='34fe748c-81ab-49ea-bec6-9c621598a61a'), AIMessage(content="Hello! How can I assist you with data analysis today? Please let me know the details of the dataset you're working with and what specific analysis you'd like to perform.", additional_kwargs={'parent_id': 'some-parent-id'}, response_metadata={}, id='a1ee29d2-723e-41c7-b420-27d0cfaed5dc')]
You can get more detailed outputs with the astream_events
method:
async for event in agent.astream_events(
input=_input,
version="v2",
):
# We ignore irrelevant events here.
if event["event"] == "on_chat_model_end":
print(event["data"]["output"])
content='Hello! How can I assist you with your data analysis today? Please let me know what dataset you are working with and what specific analyses or visualizations you would like to perform.' additional_kwargs={} response_metadata={'finish_reason': 'stop', 'model_name': 'TableGPT2-7B'} id='run-525eb149-0e3f-4b04-868b-708295f789ac'